Science.gov

Sample records for 3d imaging systems

  1. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises a radiation source for irradiating the object, rotationally movable about the object, and a detector for the backscattered radiation that can be disposed on substantially the same side of the object as the source and can likewise be rotationally movable about the object. The detector can be separated into multiple segments, each having a single line-of-sight projection through the object so that it detects radiation along that line of sight only; each segment can thus isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three-dimensional reconstruction of the object. Other embodiments are described.
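
The rotate-and-collect geometry described in this record amounts to tomographic reconstruction from line-of-sight sums. As a generic illustration (not the patent's actual algorithm), a minimal unfiltered backprojection in 2-D can be sketched with NumPy; the phantom, angles, and nearest-neighbour rotation helper are all hypothetical choices for the sketch:

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation about the image centre (inverse mapping)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    y, x = np.indices(img.shape)
    t = np.deg2rad(deg)
    xs = np.cos(t) * (x - c) + np.sin(t) * (y - c) + c
    ys = -np.sin(t) * (x - c) + np.cos(t) * (y - c) + c
    xs = np.clip(np.round(xs).astype(int), 0, n - 1)
    ys = np.clip(np.round(ys).astype(int), 0, n - 1)
    return img[ys, xs]

def backproject(projections, angles, n):
    """Unfiltered backprojection: smear each 1-D projection across the
    image and rotate it back to its acquisition angle, then average."""
    recon = np.zeros((n, n))
    for proj, a in zip(projections, angles):
        recon += rotate_nn(np.tile(proj, (n, 1)), a)
    return recon / len(angles)

# Toy object and simulated line-of-sight sums at several rotation angles.
n = 64
obj = np.zeros((n, n))
obj[20:28, 40:48] = 1.0
angles = range(0, 180, 10)
projections = [rotate_nn(obj, -a).sum(axis=0) for a in angles]

recon = backproject(projections, angles, n)
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

Even without the filtering step of a real reconstruction, the backprojection peak lands on the object, which is why images at multiple rotation angles suffice to localize structure in depth.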

  2. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test-bench integral imaging camera system aimed at tailoring the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  3. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscope imaging system with dimensions of 35×35×105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of a biological specimen can be captured in a single shot. From the raw light-field data, the focal plane can be shifted digitally and a 3-D image reconstructed after the picture is taken. To localize an object in a 3-D volume, an automated data-analysis algorithm that precisely distinguishes depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel utilization efficiency and reduce crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two differently colored fluorescent particles separated by a cover glass over a 600 μm range, and show its focal stacks and 3-D positions.
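
The digital refocusing this record relies on is commonly done by shift-and-sum over the sub-aperture views behind the microlens array. The sketch below is a generic illustration under that assumption, not the authors' code; the 5×5 view grid, image size, and disparity are invented for the demo:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum synthetic refocusing of a 4-D light field L[u, v, y, x]:
    shift each sub-aperture view in proportion to its offset from the
    lens-array centre, then average. Varying alpha moves the focal plane."""
    U, V, H, W = lightfield.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# A point source at depth: each of the 5x5 views sees it displaced 2 px/lens.
U = V = 5
lf = np.zeros((U, V, 32, 32))
for u in range(U):
    for v in range(V):
        lf[u, v, 16 + 2 * (u - 2), 16 + 2 * (v - 2)] = 1.0

sharp = refocus(lf, -2.0)   # focal plane at the source depth: views align
blurred = refocus(lf, 0.0)  # wrong focal plane: energy spreads out
```

Sweeping `alpha` over a range of values yields the focal stack mentioned in the abstract, one synthetic focal plane per value.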

  4. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns in the bed at that level. The current flux field patterns are sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is obtained. 7 figs.
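
The level-by-level scheme in this patent can be illustrated with a linearized toy model: measurements at one electrode level are treated as a linear function of the density pattern at that level, inverted by regularized least squares, and the recovered levels are stacked into a volume. The sensitivity matrix, sizes, and regularization below are all invented for the sketch, not taken from the patent:

```python
import numpy as np

def reconstruct_level(S, m, lam=1e-3):
    """One electrode level: recover the density pattern from electrode
    measurements m = S @ density via Tikhonov-regularized least squares."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ m)

# Toy setup: 3 levels, 12 electrode-pair measurements, 16 pixels per level.
rng = np.random.default_rng(0)
S = rng.normal(size=(12, 16))        # stand-in sensitivity matrix
true = rng.uniform(size=(3, 16))     # density pattern at each level
levels = [reconstruct_level(S, S @ t) for t in true]
volume = np.stack(levels)            # stacked level patterns = 3-D image
```

The stacking step mirrors the patent's final move: combining the per-level density patterns into a three-dimensional density image.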

  5. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisition. The portability of the system was achieved via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms to target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment were also developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
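
Densitometric body-composition methods such as the air displacement plethysmography used for comparison convert body density into percent fat, with the 3-D scanner supplying the volume. A simplified sketch using Siri's two-compartment equation, with a hypothetical subject and ignoring corrections such as thoracic gas volume that a real protocol applies:

```python
def percent_body_fat(mass_kg, body_volume_l):
    """Siri's two-compartment equation: %fat = 495 / D - 450,
    with body density D = mass / volume in kg/L (== g/cm^3)."""
    density = mass_kg / body_volume_l
    return 495.0 / density - 450.0

# Hypothetical subject: 70 kg, 66.5 L of 3-D-scanned body volume.
fat_pct = percent_body_fat(70.0, 66.5)
```

This is why volume accuracy matters so much in 3-D body imaging: a small relative error in volume shifts the density and hence the fat estimate appreciably.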

  6. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  7. Laboratory 3D Micro-XRF/Micro-CT Imaging System

    NASA Astrophysics Data System (ADS)

    Bruyndonckx, P.; Sasov, A.; Liu, X.

    2011-09-01

    A prototype micro-XRF laboratory system based on pinhole imaging was developed to produce 3D elemental maps. The fluorescence x-rays are detected by a deep-depleted CCD camera operating in photon-counting mode. A charge-clustering algorithm, together with dynamically adjusted exposure times, ensures a correct energy measurement. The XRF component has a spatial resolution of 70 μm and an energy resolution of 180 eV at 6.4 keV. The system is augmented by a micro-CT imaging modality. This is used for attenuation correction of the XRF images and to co-register features in the 3D XRF images with morphological structures visible in the volumetric CT images of the object.

  8. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  9. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  10. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as a part of the comprehensive approach of the European FP7 project TeraSCREEN, using multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within one single array. Using a bandwidth of 30 GHz, a range resolution up to 5 mm is obtained. With the 16×16 MIMO system 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles where the angular resolution is obtained by a focusing elliptical mirror. With this system a high resolution 3D image can be generated with 4 frames per second, each containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system underlining the feasibility of the approach.
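
Two of the figures quoted above follow directly from first principles: the range resolution from the 30 GHz bandwidth, and the azimuth bin count from the 16×16 MIMO virtual array. A quick check:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Pulse-compression/FMCW range resolution: dR = c / (2 B)."""
    return C / (2.0 * bandwidth_hz)

def virtual_channels(n_tx, n_rx):
    """A MIMO array forms n_tx * n_rx virtual receive channels,
    here one per resolvable azimuth bin."""
    return n_tx * n_rx

dr_mm = range_resolution(30e9) * 1e3   # ~5 mm, matching the abstract
bins = virtual_channels(16, 16)        # 256 azimuth bins
```

The 16-million-point frames then come from multiplying the 256 azimuth bins by the 130 mechanically steered elevation angles and the range bins per beam.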

  11. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition and others. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. First, we extract geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected to a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to the surface shape provides sufficient 3D coordinate information. Second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns projected onto the object of interest. Akin to the Shannon-Nyquist Sampling Theorem, we derive the minimum number of circular patterns that sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g. the highest curvature) of an object is key to deriving the minimum sampling density. Third, the object, represented using the minimum number of patterns, has incomplete color information (i.e. color information is given a priori only along the curves), and an interpolation is carried out to complete the photometric reconstruction. The reconstruction is approximate because the minimum number of patterns may not exactly reproduce the original object, but the result does not show considerable information loss, and the quality of the approximate reconstruction is evaluated by performing recognition or classification. For object recognition, we use facial curves, which are deformed circular curves (patterns) on a target object.
We simply carry out comparison between the

  12. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. This review first gives an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their application in agriculture is reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  15. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, a fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. From another viewpoint, the fringe patterns, deformed by the finger surface, are captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
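
Fringe-projection profilometry recovers surface shape from the phase of the deformed fringes. The abstract does not state the exact phase-retrieval algorithm; a common choice, shown here only as an illustration, is four-step phase shifting (patterns offset by 90°), after which a multi-frequency method such as the optimum three-fringe-number technique would unwrap the phase:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shifting: patterns offset by 0, 90, 180, 270 degrees
    give the wrapped fringe phase via an arctangent."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic 1-D fringes: phi_true stands in for the surface-deformed phase.
x = np.linspace(0.0, 1.0, 200)
phi_true = 4.0 * np.pi * x
shots = [1.0 + np.cos(phi_true + k * np.pi / 2.0) for k in range(4)]
phi_wrapped = wrapped_phase(*shots)
```

The arctangent returns phase wrapped to (-π, π]; unwrapping with fringe patterns of several fringe numbers is what turns this into absolute phase and hence absolute 3D shape.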

  16. Snapshot 3D optical coherence tomography system using image mapping spectrometry

    PubMed Central

    Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

    2013-01-01

    A snapshot 3-dimensional optical coherence tomography system was developed using image mapping spectrometry. This system can give depth information (Z) at different spatial positions (X, Y) within one camera integration time, to potentially reduce motion artifact and enhance throughput. The current (x, y, λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm depth and 13.4 μm transverse resolution. An axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629
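
The 16.0 μm axial resolution is set by the source coherence length. The abstract does not state the source spectrum; the centre wavelength and bandwidth below are hypothetical values, chosen only to reproduce the reported figure for a Gaussian source:

```python
import math

def oct_axial_resolution(center_wavelength_m, bandwidth_m):
    """Coherence-length axial resolution of OCT for a Gaussian source:
    dz = (2 ln 2 / pi) * lambda0^2 / (delta lambda)."""
    return (2.0 * math.log(2.0) / math.pi) * center_wavelength_m**2 / bandwidth_m

# Assumed source (not from the abstract): 830 nm centre, 19 nm FWHM bandwidth.
res_um = oct_axial_resolution(830e-9, 19e-9) * 1e6
```

Note that the axial resolution depends only on the source spectrum, not on the imaging optics, which is why it is quoted separately from the 13.4 μm transverse resolution.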

  17. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data is desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to achieve over 1000 simultaneous A-scans at more than 75 frames per second.

  18. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo image by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming for the implementation of a 3D exaggeration algorithm for the ROI (region of interest) by adjusting and synthesizing the disparity value of the ROI in real time. We comment on the aperture pattern for deblurring of the CMOS camera module based on the Kirchhoff diffraction formula and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to prove the validity of the emphasizing effect on the ROI.

  19. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  20. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution, and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but has a noise pattern that cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm. The system has 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery, where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
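
The two resolution figures in this record can be cross-checked against each other: a just-resolvable feature of size d corresponds to a spatial frequency of one line pair per 2d. A small sanity check (the 0.45 mm value is the 10% MTF point, so it need not coincide exactly with the ~9 lp/cm bar-pattern figure):

```python
def lp_per_cm(feature_mm):
    """One line pair spans two features of size d (in mm):
    f = 1 / (2 d) lp/mm = 10 / (2 d) lp/cm."""
    return 10.0 / (2.0 * feature_mm)

freq = lp_per_cm(0.45)   # ~11 lp/cm, the same order as the observed 9 lp/cm
```

The rough agreement between the MTF-derived and bar-pattern figures is the expected behaviour for a well-characterized system.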

  1. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm.

  3. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB® and Simulink®- based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  4. Mechanically assisted 3D prostate ultrasound imaging and biopsy needle-guidance system

    NASA Astrophysics Data System (ADS)

    Bax, Jeffrey; Williams, Jackie; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Karnik, Vaishali; Sherebrin, Shi; Romagnoli, Cesare; Fenster, Aaron

    2010-02-01

    Prostate biopsy procedures are currently limited to using 2D transrectal ultrasound (TRUS) imaging to guide the biopsy needle. Being limited to 2D causes ambiguity in needle guidance and provides an insufficient record to allow guidance to the same suspicious locations, or avoidance of regions that were negative in previous biopsy sessions. We have developed a mechanically assisted 3D ultrasound imaging and needle-tracking system, which supports a commercially available TRUS probe and integrated needle guide for prostate biopsy. The mechanical device is fixed to a cart, and the mechanical tracking linkage allows its joints to be manually manipulated while fully supporting the weight of the ultrasound probe. A computer interface is provided to track the needle trajectory and display its path on a corresponding 3D TRUS image, allowing the physician to aim the needle guide at predefined targets within the prostate. The system has been designed for use with several end-fired transducers, which can be rotated about the longitudinal axis of the probe to generate a 3D image for navigation. Using the system, 3D TRUS prostate images can be generated in approximately 10 seconds. The system removes most of the user variability of conventional hand-held probes, variability that makes them unsuitable for precision biopsy, while preserving some of the user familiarity and procedural workflow. In this paper, we describe the 3D TRUS guided biopsy system and report on its initial clinical use for prostate biopsy.

  5. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate

    PubMed Central

    Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

    2012-01-01

    Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsies in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4 ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients. PMID:22708023

  6. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged with contrast-enhanced breast computed tomography (CT) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results The augmented reality system provided 3D viewing of the breast mass with head-position tracking, stereoscopic depth perception, and focal-point convergence; a 3D cursor and joystick enabled fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  7. The pulsed all fiber laser application in the high-resolution 3D imaging LIDAR system

    NASA Astrophysics Data System (ADS)

    Gao, Cunxiao; Zhu, Shaolan; Niu, Linquan; Feng, Li; He, Haodong; Cao, Zongying

    2014-05-01

    An all-fiber laser with a master-oscillator power-amplifier (MOPA) configuration at 1064 nm/1550 nm for a high-resolution three-dimensional (3D) imaging light detection and ranging (LIDAR) system is reported. The pulse width and repetition frequency can be arbitrarily tuned over 1 ns to 10 ns and 10 kHz to 1 MHz, respectively, and peak powers exceeding 100 kW can be obtained from the laser. Using this all-fiber laser in the high-resolution 3D imaging LIDAR system, an image resolution of 1024 × 1024 and a distance precision of ±1.5 cm were obtained at an imaging distance of 1 km.

  8. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect the distribution patterns of abnormalities are more beneficial for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the spatial distribution of lesions. To further improve retrieval accuracy, we propose a retrieval method using 3D features, including geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from the 3D gray-level co-occurrence matrix, extracted from 3D ROIs and built on our previous 2D medical image retrieval system. The system was evaluated with 20 volumetric CT datasets for colon polyp detection. Preliminary experiments indicated that integrating morphological features with texture features greatly improves retrieval performance. Retrieval using features extracted from 3D ROIs agreed better with the diagnosis from optical colonoscopy than retrieval based on features from 2D ROIs. On the test database, the average accuracy rate of the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
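    The Shape Index and Curvedness named above are standard Koenderink surface descriptors computed from the principal curvatures; a minimal sketch (assuming these standard definitions, which the abstract does not spell out; function names are illustrative):

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1]; cap-like surfaces (e.g. polyps)
    score near +1, cup-like near -1.  arctan2 handles the umbilic case
    k1 == k2 without division by zero."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)  # enforce k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    """Koenderink curvedness: overall magnitude of surface bending."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)

# A sphere of radius r has k1 = k2 = 1/r: shape index +1 (dome-like),
# curvedness 1/r -- the signature a cap-shaped polyp approximates.
r = 5.0
si = shape_index(1.0 / r, 1.0 / r)
cv = curvedness(1.0 / r, 1.0 / r)
```

    In a CBMIR pipeline these per-voxel values would typically be histogrammed over the ROI to form part of the feature vector.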

  9. Quantitative wound healing measurement and monitoring system based on an innovative 3D imaging system

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Yang, Arthur; Yin, Gongjie; Wen, James

    2011-03-01

    In this paper, we report a novel three-dimensional (3D) wound imaging system (hardware and software) under development at Technest Inc. The system is designed to perform accurate 3D measurement and modeling of a wound and to track its healing status over time. Accurate measurement and tracking of wound healing enables physicians to assess, document, improve, and individualize the treatment plan given to each wound patient. In current wound care practice, physicians often visually inspect or roughly measure the wound to evaluate its healing status. This is not optimal, since human vision lacks precision and consistency, and quantifying slow or subtle changes by perception alone is very difficult. As a result, an instrument that quantifies both skin color and geometric shape variations would be particularly useful in helping clinicians assess healing status and judge the effects of hyperemia, hematoma, local inflammation, secondary infection, and tissue necrosis. Once fully developed, our 3D imaging system will have several unique advantages over traditional methods for monitoring wound care: (a) non-contact measurement; (b) fast and easy use; (c) measurement accuracy up to 50 microns; (d) quantitative 2D/3D measurements; (e) a handheld device; and (f) reasonable cost (< $1,000).

  10. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three-dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. Current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and yields a system that is solved simultaneously for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes, allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-band radar data.

  11. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  12. Note: An improved 3D imaging system for electron-electron coincidence measurements

    NASA Astrophysics Data System (ADS)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-01

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  13. A web-based 3D medical image collaborative processing system with videoconference

    NASA Astrophysics Data System (ADS)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet remains one of the biggest challenges in supporting these activities. We therefore present a new approach for web-based synchronized collaborative processing and visualization of 3D medical images. A web-based videoconference function is also provided to enhance the performance of the whole system. All functions of the system are conveniently available in common web browsers, without any client-side installation. Finally, this paper evaluates the prototype system using 3D medical datasets, demonstrating its good performance.

  14. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.

  15. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed, and compressed-sensing-based methods have recently been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed-sensing-based reconstruction methods to 3D digital tomosynthesis datasets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real datasets. The simulator has been tested in a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including ART and total-variation-regularized reconstruction (ART+TV), are presented. Reconstruction results are compared both visually and quantitatively by evaluating the methods' mean structural similarity (MSSIM) values. PMID:24371468
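    The ART iteration named above is, at its core, the Kaczmarz row-action method: each ray-sum equation defines a hyperplane, and the estimate is repeatedly projected onto them. A minimal sketch on a toy system (not the simulator's C++ implementation; the matrix and names here are illustrative):

```python
import numpy as np

def art_reconstruct(A, b, n_iters=500, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps): for each
    projection equation a_i . x = b_i, move the current estimate onto
    the hyperplane of row i, scaled by a relaxation factor."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
    return x

# Toy "phantom": 4 unknown voxel values, 6 consistent ray sums.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # stand-in system matrix
x_true = np.array([1.0, 0.5, 0.0, 2.0])
b = A @ x_true                    # simulated projections
x_rec = art_reconstruct(A, b)
```

    For a consistent system, the sweeps converge to the true voxel values; in real DBT the system is huge, sparse, and noisy, so relaxation and stopping criteria matter far more than in this sketch.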

  16. A framework for human spine imaging using a freehand 3D ultrasound system.

    PubMed

    Purnama, Ketut E; Wilkinson, Michael H F; Veldhuizen, Albert G; van Ooijen, Peter M A; Lubbers, Jaap; Burgerhof, Johannes G M; Sardjono, Tri A; Verkerke, Gijbertus J

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular those based on X-ray, its lack of harmful effects allows frequent use to follow the progression of scoliosis, which can sometimes develop rapidly. Furthermore, 3D ultrasound imaging provides information directly in 3D, in contrast to projection methods. This paper describes a feasibility study of an ultrasound system to provide a 3D image of the human spine, and presents a framework of procedures to perform this task. The framework consists of an ultrasound image acquisition procedure to image a large part of the human spine by means of a freehand 3D ultrasound system, and a volume reconstruction procedure performed in four stages: bin-filling, hole-filling, volume segment alignment, and volume segment compounding. The overall results of the procedures in this framework show that imaging the human spine using ultrasound is feasible. Vertebral parts such as the transverse processes, laminae, superior articular processes, and spinous processes of the vertebrae appear as clouds of voxels with intensities higher than the surrounding voxels. In sagittal slices, a string of transverse processes appears, representing the curvature of the spine. In the bin-filling stage, the estimated mean absolute noise level of a single measurement of a single voxel was determined. Our comparative study of hole-filling methods, based on rank-sum statistics, showed that the pixel nearest neighbour (PNN) method with variable radius and the proposed olympic operation is the best method. Its mean absolute grey-value error was smaller in magnitude than the noise level of a single measurement.

  17. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    SciTech Connect

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.

  18. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: The inability to observe dose distributions intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential for improving the accuracy and efficiency of implantation. Hence, a 3D image-guided brachytherapy planning system performing dose planning and intra-operative navigation based on 3D multi-organ reconstruction was developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with a Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. Real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion of MRI and ultrasound images. Applying the least-squares method, coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system was validated on eight patients with prostate cancer. The navigation passed precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared to MC, the presented multi-organ reconstruction method better preserves the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissues. During navigation, surgeons can observe the coordinates of the instruments in real time using the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and

  19. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can have many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure, resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are qualitative. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images, giving a quantitative volumetric measurement of ventricle enlargement without moving the patient to an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were smaller than pre-intervention measurements for all patients and all interventions. We found high correlation (R = 0.97) between the difference in ventricle volume and the reported removed CSF, with a slope not significantly different from 1 (p < 0.05). Ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other showed no significant difference (p = 0.44) by paired t-test.

  20. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed tomography (CT) scanning is widely used in the diagnosis of TBI, and a large amount of TBI CT data is now stored in hospital radiology departments. These data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the case under study. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system that works on TBI CT images. In this web-based system, users query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. TBI CT images often present diffuse or focal lesions; in the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement and, based on it, propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, showing that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
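    On bin-based binary feature vectors, the Jaccard-Needham measure reduces to the set-overlap ratio of the set bits; a minimal per-slice sketch (the paper's 3D aggregation across slice series is not reproduced here, and the vectors are illustrative):

```python
import numpy as np

def jaccard_similarity(u, v):
    """Jaccard similarity of two binary feature vectors:
    |intersection| / |union| of the set bits."""
    u = np.asarray(u, dtype=bool)
    v = np.asarray(v, dtype=bool)
    union = np.logical_or(u, v).sum()
    if union == 0:
        return 1.0  # two empty feature sets: treat as identical
    return np.logical_and(u, v).sum() / union

# Toy bin-based lesion-feature vectors for a query slice and a stored case.
query = [1, 1, 0, 1, 0, 0]
case  = [1, 0, 0, 1, 1, 0]
score = jaccard_similarity(query, case)  # intersection 2, union 4 -> 0.5
```

    Ranking retrieved cases by such scores (suitably combined over all slices of each series) yields the similarity-ordered result list described above.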

  1. 3D Image Acquisition System Based on Shape from Focus Technique

    PubMed Central

    Billiot, Bastien; Cointault, Frédéric; Journaux, Ludovic; Simon, Jean-Claude; Gouton, Pierre

    2013-01-01

    This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In the agronomic sciences, 3D acquisition of natural scenes is difficult because of their complex nature. Our system is based on the Shape from Focus technique, initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform it. Shape from Focus is a monocular, passive 3D acquisition method that avoids the occlusion problem affecting multi-camera systems; this problem occurs frequently in natural complex scenes such as agronomic scenes. Depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied to a 2D image stack previously acquired by the system; from this focus measure, the depth map of the scene is created. PMID:23591964
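    The Shape from Focus pipeline described above (focus measure over a stack, then per-pixel depth) can be sketched as follows. The paper does not specify its focus measure; a squared 4-neighbour Laplacian is used here as a minimal stand-in for the common modified-Laplacian family:

```python
import numpy as np

def focus_measure(img):
    """Per-pixel focus measure: squared response of a 4-neighbour
    Laplacian (a minimal stand-in for modified-Laplacian measures)."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def depth_from_focus(stack):
    """Depth map = index of the stack slice where each pixel is sharpest."""
    measures = np.stack([focus_measure(img) for img in stack])
    return np.argmax(measures, axis=0)

# Synthetic 3-slice stack: a sharp bright dot appears only in slice 2,
# so the recovered depth at that pixel should be 2.
stack = np.zeros((3, 16, 16))
stack[2, 8, 8] = 1.0
depth = depth_from_focus(stack)
```

    A practical system would additionally fit the focus-measure curve (e.g. Gaussian interpolation over slice indices) to recover sub-slice depth.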

  2. Automatic 3D power line reconstruction of multi-angular imaging power line inspection system

    NASA Astrophysics Data System (ADS)

    Zhang, Wuming; Yan, Guangjian; Wang, Ning; Li, Qiaozhi; Zhao, Wei

    2007-06-01

    We have developed a multi-angular imaging power line inspection system. Its main objective is to monitor the relative distance between a high-voltage power line and surrounding objects, and to raise an alert if a warning threshold is exceeded. The system generates a DSM of the power line corridor, comprising the ground surface and ground objects such as trees and houses. To reveal dangerous regions, where ground objects are too close to the power line, 3D power line information must be extracted at the same time. To improve the level of automation and reduce labour costs and human errors, an automatic 3D power line reconstruction method is proposed and implemented, using the epipolar constraint and prior knowledge of pole tower height. The 3D power line geometry is then obtained by space intersection using the identified homologous projections. A flight experiment shows that the proposed method successfully reconstructs the 3D power line, and that the measurement accuracy of the relative distance satisfies the user requirement of 0.5 m.
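    The space intersection step named above, recovering a 3D point from homologous projections, amounts to linear (DLT) triangulation when the projection matrices are known from orientation. A hedged sketch with toy cameras (the matrices and points here are illustrative, not the paper's flight geometry):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given 3x4 projection matrices P1, P2
    and homologous image points x1, x2, solve the stacked reprojection
    constraints for the 3D point via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize

def project(P, X):
    """Pinhole projection of a 3D point to image coordinates."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two toy cameras: identity intrinsics, second translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])       # a point on the "power line"
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    With noisy matches, the same least-squares formulation is solved over all observing views rather than just two.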

  3. 3-D ultrasonic strain imaging based on a linear scanning system.

    PubMed

    Huang, Qinghua; Xie, Bo; Ye, Pengfei; Chen, Zhaohong

    2015-02-01

    This paper introduces a 3-D strain imaging method based on a freehand linear scanning mode. We designed a linear sliding track with a position sensor and a height-adjustable holder to constrain the movement of an ultrasound probe in a freehand manner. When the probe is moved along the sliding track, the corresponding positional measures are transmitted in real time via a Bluetooth-based wireless communication module. In a single examination, the probe is scanned in two sweeps, in which the height of the probe is adjusted by the holder to collect the pre- and post-compression radio-frequency echoes, respectively. To generate a 3-D strain image, a cubic volume, in which voxels denote relative tissue strains, is defined according to the range of the two sweeps. With respect to the post-compression frames, several slices in the volume are determined, and the pre-compression frames are re-sampled to correspond precisely to the post-compression frames. A strain estimation method based on minimizing a cost function with dynamic programming is then used to obtain a 2-D strain image for each pair of frames from the re-sampled pre-compression sweep and the post-compression sweep. A software system was developed for volume reconstruction, visualization, and measurement of the 3-D strain images. The experimental results show that high-quality 3-D strain images of phantom and human tissues can be generated by the proposed method, indicating that the proposed system can be applied in real clinical applications (e.g., musculoskeletal assessment).
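    The paper estimates displacement by dynamic-programming cost minimization; the underlying principle, strain as the spatial gradient of tissue displacement between pre- and post-compression echoes, can be illustrated with a much simpler window cross-correlation sketch on synthetic 1-D data (integer-sample shifts only; parameters are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
pre = rng.standard_normal(n)       # synthetic RF scatterer signal

# Simulate a uniform 1% axial compression: post(x) = pre((1 + s) * x).
true_strain = 0.01
x = np.arange(n)
post = np.interp((1 + true_strain) * x, x, pre)

# Estimate the integer displacement of each post-compression analysis
# window by cross-correlation against the pre-compression signal.
win, max_lag = 64, 40
centers, shifts = [], []
for c in range(200, 1600, 200):
    w = post[c:c + win]
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.dot(w, pre[c + L:c + L + win]) for L in lags]
    centers.append(c)
    shifts.append(lags[int(np.argmax(scores))])

# Strain = slope of displacement vs. depth (least-squares line fit).
est_strain = np.polyfit(centers, shifts, 1)[0]
```

    Dynamic programming improves on this by enforcing displacement continuity between neighbouring windows, which is what makes the method robust to signal decorrelation in real tissue.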

  4. A preliminary evaluation work on a 3D ultrasound imaging system for 2D array transducer

    NASA Astrophysics Data System (ADS)

    Zhong, Xiaoli; Li, Xu; Yang, Jiali; Li, Chunyu; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents a preliminary evaluation of a pre-designed 3-D ultrasound imaging system. The system consists of four main parts: a 7.5 MHz, 24 × 24 2-D array transducer; the transmit/receive circuit; the power supply; and the data acquisition and real-time imaging module. A row-column addressing scheme is adopted for the transducer fabrication, which greatly reduces the number of active channels. The element area of the transducer is 4.6 mm by 4.6 mm. Four kinds of tests were carried out to evaluate imaging performance: penetration depth range, axial and lateral resolution, positioning accuracy, and 3-D imaging frame rate. Several strongly reflecting metal objects, fixed in a water tank, were selected for imaging because of the low signal-to-noise ratio of the transducer. The distance between the transducer and the tested objects, the thickness of an aluminum sheet, and the seam width of the aluminum sheet were measured by a calibrated micrometer to evaluate the penetration depth and the axial and lateral resolution, respectively. The experimental results showed an imaging penetration depth range of 1.0 cm to 6.2 cm, axial and lateral resolutions of 0.32 mm and 1.37 mm respectively, an imaging speed of up to 27 frames per second, and a positioning accuracy of 9.2%.

  5. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

    An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. The laser uses a tiny, adjustable microelectromechanical system (MEMS) mirror, made by SiWave, to sweep the laser frequency. The laser device itself is small (70 × 50 × 13 mm). The LADAR builds on mature fiber-optic telecommunication technologies throughout the system, making this innovation an efficient performer. Its small size and light weight make the system useful for commercial and industrial applications including surface damage inspection, range measurement, and 3D imaging.

  6. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time. PMID:26954841

  7. A small animal image guided irradiation system study using 3D dosimeters

    NASA Astrophysics Data System (ADS)

    Qian, Xin; Admovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high-resolution image-guided small animal irradiation platform, cone-beam computed tomography (CBCT) is integrated with an irradiation unit for precise targeting, and precise quality assurance is essential for both the imaging and irradiation components. Conventional commissioning techniques using films face major challenges due to alignment uncertainty and labour-intensive film preparation and scanning. In addition, owing to the novel design of this platform, the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are driven by different mechanical systems, a discrepancy between the rotation isocenters exists; to deliver x-rays precisely, it is essential to verify the coincidence of the imaging and irradiation isocenters. A 3D PRESAGE dosimeter provides an excellent tool for checking dosimetry and verifying the coincidence of the irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). The isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with star shots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  8. Noise analysis for near field 3-D FM-CW radar imaging systems

    SciTech Connect

    Sheen, David M.

    2015-06-19

    Near field radar imaging systems are used for several applications including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit the performance in several ways including reduction in system sensitivity and reduction of image dynamic range. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of these noise sources on a fast-chirping FM-CW system.
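    The FM-CW principle underlying the system analyzed above can be sketched numerically: a chirp of bandwidth B over duration T returns delayed by tau = 2R/c, and mixing it with the transmit chirp produces a beat tone at f_b = B * tau / T, so R = c * T * f_b / (2B) with a per-FFT-bin range resolution of c / (2B). A hedged simulation with illustrative parameters (not the paper's hardware, and ignoring the phase/thermal noise the paper analyzes):

```python
import numpy as np

c = 3.0e8
B, T = 1.0e9, 1.0e-3        # 1 GHz sweep over 1 ms (illustrative)
R_true = 10.0               # target range in metres
fs, N = 1.0e6, 1000         # sample the beat signal over one sweep

# Ideal dechirped (beat) signal for a point target at R_true.
f_beat = B * (2 * R_true / c) / T
t = np.arange(N) / fs
beat = np.cos(2 * np.pi * f_beat * t)

# Range profile: FFT of the beat signal; peak bin maps back to range.
spectrum = np.abs(np.fft.rfft(beat))
peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
f_est = peak_bin * fs / N
R_est = c * T * f_est / (2 * B)
```

    Adding phase noise to the transmit chirp and thermal noise to the receive chain, as in the paper's time-domain simulation, would smear and raise the floor of this spectrum, which is exactly the dynamic-range degradation being analyzed.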

  9. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early-stage breast cancers which are occult in the corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.
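The pipeline described above (normalization, multi-scale feature extraction, boosted classification) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: Gaussian-smoothed intensities serve as the multi-scale features, scikit-learn's AdaBoost replaces the gentle boost classifier, and the "nipple" is a single bright blob in a toy volume.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import AdaBoostClassifier

def multiscale_features(volume, scales=(1.0, 2.0, 4.0)):
    """Stack Gaussian-smoothed intensities at several scales as per-voxel features."""
    feats = [gaussian_filter(volume.astype(float), s) for s in scales]
    return np.stack(feats, axis=-1).reshape(-1, len(scales))

# Toy volume: one bright blob (stand-in for the nipple) on a noisy background.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.1, (16, 16, 16))
vol[8, 8, 8] = 5.0

X = multiscale_features(vol)
y = np.zeros(vol.size, dtype=int)
y[np.ravel_multi_index((8, 8, 8), vol.shape)] = 1  # the only positive voxel

clf = AdaBoostClassifier(n_estimators=20, random_state=0).fit(X, y)
scores = clf.decision_function(X)
# The voxel scored most confidently as "nipple" is reported as the detection.
print(np.unravel_index(np.argmax(scores), vol.shape))
```

A real detector would of course train on many annotated views and use richer features; the sketch only shows the shape of the per-voxel classification approach.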

  10. A 3D- and 4D-ESR imaging system for small animals.

    PubMed

    Oikawa, K; Ogata, T; Togashi, H; Yokoyama, H; Ohya-Nishiguchi, H; Kamada, H

    1996-01-01

    A new version of the in vivo ESR-CT system, composed of a custom-made 0.7 GHz ESR spectrometer, an air-core magnet with a field-scanning coil, three field-gradient coils, and two computers, enables up- and down-field scanning and rapid magnetic-field scanning, linearly controlled by computer. 3D pictures of the distribution of nitroxide radicals injected into the brains and livers of rats and mice were obtained in 1.5 min with a resolution of 1 mm. We have also succeeded in obtaining spatio-temporal (4D) images of the animals.

  11. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  12. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR

    SciTech Connect

    Yun, G. S.; Choi, M. J.; Lee, J.; Kim, M.; Leem, J.; Nam, Y.; Choe, G. H.; Lee, W.; Park, H. K.; Park, H.; Woo, D. S.; Kim, K. W.; Domier, C. W.; Luhmann, N. C.; Ito, N.; Mase, A.; Lee, S. G.

    2014-11-15

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated by 1/16th of the torus from the first ECEI system. For the first time, the dynamical evolutions of MHD instabilities from the plasma core to the edge have been visualized in quasi-3D for a wide range of the KSTAR operation (B₀ = 1.7–3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components including the development of broad-band polarization rotators for imaging of the fundamental ordinary ECE as well as the usual 2nd harmonic extraordinary ECE.

  13. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.
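The "thousands of volumes per second" figure follows directly from acoustic time of flight: with a single diverging- or plane-wave transmit per volume, the volume rate is capped by the round trip of sound to the maximum imaging depth, rate ≈ c / (2·depth). A minimal check, with an assumed illustrative depth of 10 cm:

```python
# Round-trip time of flight sets the ceiling on volume rate for
# single-transmit (plane/diverging wave) imaging: rate_max ≈ c / (2 * depth).
c = 1540.0      # speed of sound in soft tissue, m/s
depth = 0.10    # assumed 10 cm imaging depth, m

rate_max = c / (2 * depth)  # volumes per second, one transmit per volume
print(f"max volume rate at {depth*100:.0f} cm depth: {rate_max:.0f} volumes/s")
```

At 10 cm this gives about 7700 volumes per second, consistent with the imaging rates quoted in the abstract.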

  14. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  16. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  17. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  18. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.

  19. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  20. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    NASA Technical Reports Server (NTRS)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single-objective-lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The difference becomes smaller as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.

  1. An efficient topology adaptation system for parametric active contour segmentation of 3D images

    NASA Astrophysics Data System (ADS)

    Abhau, Jochen; Scherzer, Otmar

    2008-03-01

    Active contour models have already been used successfully for segmentation of organs from medical images in 3D. In implicit models, the contour is given as the isosurface of a scalar function, and therefore topology adaptations are handled naturally during a contour evolution. Nevertheless, explicit or parametric models are often preferred, since user interaction and special geometric constraints are usually easier to incorporate. Although many researchers have studied topology adaptation algorithms in explicit mesh evolutions, no stable algorithm is known for interactive applications. In this paper, we present a topology adaptation system which consists of two novel ingredients: a spatial hashing technique, whose expected running time is linear in the number of mesh vertices, is used to detect self-colliding triangles of the mesh; and for the topology change procedure, we have developed formulas using homology theory. During a contour evolution, we just have to choose between a few possible mesh retriangulations by local triangle-triangle intersection tests. Our algorithm has several advantages compared to existing ones: since it does not require any global mesh reparametrizations, it is very efficient; and since the topology adaptation system requires neither a constant sampling density of the mesh vertices nor especially smooth meshes, mesh evolution steps can be performed in a stable way with a rather coarse mesh. We apply our algorithm to 3D ultrasonic data, showing that accurate segmentation is obtained in a few seconds.
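The spatial-hashing idea above (bucket triangles by grid cell so that only co-located triangles need the exact triangle-triangle intersection test) can be sketched as follows. The cell size and the bounding-box rasterization are illustrative choices, not the paper's implementation:

```python
from collections import defaultdict
from itertools import combinations

def spatial_hash_candidates(triangles, cell=1.0):
    """Hash each triangle's bounding box into grid cells; only triangles
    sharing a cell become candidate pairs for the exact intersection test."""
    grid = defaultdict(list)
    for idx, tri in enumerate(triangles):
        xs, ys, zs = zip(*tri)
        # enumerate the cells overlapped by the axis-aligned bounding box
        for cx in range(int(min(xs) // cell), int(max(xs) // cell) + 1):
            for cy in range(int(min(ys) // cell), int(max(ys) // cell) + 1):
                for cz in range(int(min(zs) // cell), int(max(zs) // cell) + 1):
                    grid[(cx, cy, cz)].append(idx)
    pairs = set()
    for bucket in grid.values():
        pairs.update(combinations(sorted(bucket), 2))
    return pairs

# Two nearby triangles share a cell; a distant one generates no pair.
tris = [
    [(0.1, 0.1, 0.1), (0.4, 0.1, 0.1), (0.1, 0.4, 0.1)],
    [(0.2, 0.2, 0.2), (0.5, 0.2, 0.2), (0.2, 0.5, 0.2)],
    [(9.0, 9.0, 9.0), (9.4, 9.0, 9.0), (9.0, 9.4, 9.0)],
]
print(spatial_hash_candidates(tris))  # {(0, 1)}
```

With a cell size on the order of the triangle size, each bucket holds O(1) triangles, which is what gives the expected running time linear in the number of mesh elements.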

  2. Noise analysis for near-field 3D FM-CW radar imaging systems

    NASA Astrophysics Data System (ADS)

    Sheen, David M.

    2015-05-01

    Near field radar imaging systems are used for demanding security applications including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit performance in several ways. Practical imaging systems can employ arrays with low gain antennas and relatively large signal distribution networks that have substantial losses which limit transmit power and increase the effective noise figure of the receiver chain, resulting in substantial thermal noise. Phase noise can also limit system performance. The signal coupled from transmitter to receiver is much larger than expected target signals. Phase noise from this coupled signal can set the system noise floor if the oscillator is too noisy. Frequency modulated continuous wave (FM-CW) radar transceivers used in short range systems are relatively immune to the effects of the coupled phase noise due to range correlation effects. This effect can reduce the phase-noise floor such that it is below the thermal noise floor for moderate performance oscillators. Phase noise is also manifested in the range response around bright targets, and can cause smaller targets to be obscured. Noise in synthetic aperture imaging systems is mitigated by the processing gain of the system. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of these noise sources on a fast-chirping FM-CW system.
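The thermal-noise and processing-gain effects discussed above follow standard link-budget arithmetic: the thermal floor is kTB plus the receiver noise figure, and the FM-CW range FFT adds roughly 10·log10(N) of coherent processing gain. A minimal sketch with assumed illustrative numbers (1 MHz IF bandwidth, 10 dB noise figure, 4096-point FFT), not values from the paper:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_floor_dbm(bandwidth_hz, temp_k=290.0, noise_figure_db=0.0):
    """Thermal noise power kTB in dBm, degraded by the receiver noise figure."""
    p_watts = k_B * temp_k * bandwidth_hz
    return 10 * math.log10(p_watts / 1e-3) + noise_figure_db

floor = noise_floor_dbm(1e6, noise_figure_db=10.0)   # assumed 1 MHz, 10 dB NF
n_samples = 4096                                      # assumed FFT length
gain_db = 10 * math.log10(n_samples)                  # coherent processing gain
print(f"noise floor: {floor:.1f} dBm, processing gain: {gain_db:.1f} dB")
```

The -174 dBm/Hz thermal density plus 60 dB of bandwidth and 10 dB noise figure puts the floor near -104 dBm, while the 4096-point range FFT buys back about 36 dB; phase noise must be compared against this post-processing floor.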

  3. 3D display and image processing system for metal bellows welding

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Son, Jung-Young

    2010-04-01

    Industrial welded metal bellows take the shape of a flexible pipeline. The most common form of bellows is a pair of washer-shaped discs of thin sheet metal stamped from strip stock. Performing the arc welding operation may cause dangerous accidents and noxious fumes. Furthermore, during the welding operation, workers must observe the object directly through a microscope while adjusting the vertical and horizontal positions of the welding-rod tip and of the bellows fixed on the jig, respectively. Welding while looking through a microscope fatigues workers. To improve a working environment in which workers sit in an uncomfortable position, and to improve productivity, we introduced 3D display and image processing. The main purpose of the system is not only to maximize industrial productivity with accuracy but also to meet safety standards through full automation of the work by remote control.

  4. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capturing, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  5. Gothic Churches in Paris ST Gervais et ST Protais Image Matching 3d Reconstruction to Understand the Vaults System Geometry

    NASA Astrophysics Data System (ADS)

    Capone, M.; Campi, M.; Catuogno, R.

    2015-02-01

    This paper is part of a research project on ribbed vault systems in French Gothic cathedrals. Our goal is to compare several Gothic cathedrals to understand the complex geometry of their ribbed vaults. The survey is not the main objective; rather, it is the means to verify the theoretical hypotheses about the geometric configuration of the flamboyant churches in Paris. The choice of survey method generally depends on the goal; in this case we had to study many churches in a short time, so we chose a 3D reconstruction method based on dense stereo image matching. This method allowed us to obtain the information necessary for our study without bringing special equipment, such as a laser scanner. The goal of this paper is to test the image-matching 3D reconstruction method on some particular case studies and to show its benefits and drawbacks. From a methodological point of view, our workflow is: a theoretical study of the geometrical configuration of rib vault systems; a 3D model based on theoretical hypotheses about the geometric definition of the vaults' form; a 3D model based on image-matching 3D reconstruction methods; and a comparison between the theoretical 3D model and the image-matching model.

  6. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impacts the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed using dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach begins with a semi-global dense matching procedure that generates a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  7. Moving-Article X-Ray Imaging System and Method for 3-D Image Generation

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth R. (Inventor)

    2012-01-01

    An x-ray imaging system and method for a moving article are provided for an article moved along a linear direction of travel while the article is exposed to non-overlapping x-ray beams. A plurality of parallel linear sensor arrays are disposed in the x-ray beams after they pass through the article. More specifically, a first half of the plurality are disposed in a first of the x-ray beams while a second half of the plurality are disposed in a second of the x-ray beams. Each of the parallel linear sensor arrays is oriented perpendicular to the linear direction of travel. Each of the parallel linear sensor arrays in the first half is matched to a corresponding one of the parallel linear sensor arrays in the second half in terms of an angular position in the first of the x-ray beams and the second of the x-ray beams, respectively.

  8. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec with a 3D ultrasound probe. Relative performance and power are addressed between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
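The benchmark figures quoted above (a 512x256x128-voxel volume at 10 MVoxels/sec) fix the per-voxel time budget the filter must meet; a quick back-of-the-envelope check:

```python
# Processing budget implied by the abstract's benchmark: a 512x256x128 volume
# filtered at 10 MVoxels/s (the per-voxel filter cost itself is not specified).
voxels = 512 * 256 * 128
rate = 10e6                    # voxels per second
seconds_per_volume = voxels / rate
ns_per_voxel = 1e9 / rate      # time budget per voxel at that pace
print(f"{voxels/1e6:.1f} MVoxels -> {seconds_per_volume:.2f} s/volume, "
      f"{ns_per_voxel:.0f} ns per voxel")
```

At 100 ns per voxel, a 1 GHz DSP core has roughly 100 cycles for each voxel's neighborhood filtering, which is why a multicore device with wide SIMD units is needed to keep pace.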

  9. High-speed biometrics ultrasonic system for 3D fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Maev, Roman G.; Severin, Fedar

    2012-10-01

    The objective of this research is to develop a new robust fingerprint identification technology based upon forming surface-subsurface (under-skin) ultrasonic 3D images of the finger pads. The presented work aims to create specialized ultrasonic scanning methods for biometric purposes. Preliminary research has demonstrated the applicability of acoustic microscopy to fingerprint reading. The additional information from internal skin layers and dermis structures contained in the scan can essentially improve confidence in the identification. Advantages of this system include high resolution and quick scanning time. Operating in pulse-echo mode provides spatial resolution up to 0.05 mm. Advantages of the proposed technology are the following:
    • Full-range "nail to nail" scanning of the fingerprint area (2.5 x 2.5 cm) in less than 5 sec with a resolution of up to 1000 dpi.
    • Collection of information about the in-depth structure of the fingerprint by a set of spherically focused 50 MHz acoustic lenses, providing a resolution of ~0.05 mm or better.
    • In addition to fingerprints, identification of sweat pores at the surface and under the skin.
    • No sensitivity to contamination of the finger's surface.
    • Detection of blood velocity using the Doppler effect to distinguish living specimens.
    • Use as a polygraph device.
    • Simple connectivity to fingerprint databases obtained with other techniques.
    • Digitally interpolated images can be enhanced, allowing greater resolution.
    • The method can be applied to fingernails and underlying tissues, providing more information.
    A laboratory prototype of the biometrics system based on the described principles was designed, built and tested. It is the first step toward a practical implementation of this technique.

  10. Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging

    NASA Astrophysics Data System (ADS)

    Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas

    2016-03-01

    In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying the X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, an initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing positions. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.

  11. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, together with applications ranging from medicine to entertainment, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time, high-resolution 3D shape acquisition system based on structured light techniques. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  12. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    PubMed Central

    2014-01-01

    Purpose: The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. Methods: To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. Results: In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Conclusion: Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs. PMID:25038809

  13. The application of 3D image processing to studies of the musculoskeletal system

    NASA Astrophysics Data System (ADS)

    Hirsch, Bruce Elliot; Udupa, Jayaram K.; Siegler, Sorin; Winkelstein, Beth A.

    2009-10-01

    Three-dimensional renditions of anatomical structures are commonly used to improve visualization, surgical planning, and patient education. However, such 3D images also contain information which is not readily apparent, and which can be mined to elucidate parameters such as joint kinematics, spatial relationships, and distortions of those relationships with movement. Here we describe two series of experiments which demonstrate the functional application of 3D imaging. The first concerns the joints of the ankle complex, where the usual description of motions in the talocrural joint is shown to be incomplete, and where the roles of the anterior talofibular and calcaneofibular ligaments in ankle sprains are clarified. The biomechanical effects of two common surgical procedures for repairing torn ligaments were also examined. The second series of experiments explores changes in the anatomical relationships between nerve elements and the cervical vertebrae with changes in neck position. They provide preliminary evidence that morphological differences may exist between asymptomatic subjects and patients with radiculopathy in certain positions, even when conventional imaging shows no difference.

  14. A model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2005-10-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems. 3D flash LADAR is the latest evolution of laser radar systems and provides a unique capability: high-resolution LADAR imagery from a single laser pulse, rather than an image constructed from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance of these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive-detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants; and 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling the non-uniformity of each individual pixel. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel by pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of
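    The per-pixel gain procedure the abstract describes (tabulate the cumulative probability function of a normal gain distribution, then invert one uniform draw per pixel through it) is inverse-transform sampling. A minimal sketch, with array size and gain statistics assumed rather than taken from the paper:

```python
# Sketch of pixel-to-pixel gain variation: the normal gain distribution is
# integrated into a cumulative probability function (CDF), and a uniform
# random number per pixel is inverted through that CDF, giving every
# detector element its own gain value.
import numpy as np

def sample_pixel_gains(shape=(128, 128), mean_gain=100.0, gain_sigma=5.0, seed=0):
    rng = np.random.default_rng(seed)
    # Tabulate the normal CDF over a grid of gain values.
    gains = np.linspace(mean_gain - 5 * gain_sigma, mean_gain + 5 * gain_sigma, 4096)
    pdf = np.exp(-0.5 * ((gains - mean_gain) / gain_sigma) ** 2)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    # One uniform draw per pixel, mapped back through the tabulated CDF.
    u = rng.random(shape)
    return np.interp(u, cdf, gains)

gain_map = sample_pixel_gains()
print(gain_map.mean(), gain_map.std())  # close to 100 and 5
```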

  15. A model and simulation to predict the performance of angle-angle-range 3D flash ladar imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2004-11-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems. 3D flash LADAR is the latest evolution of laser radar systems and provides a unique capability: high-resolution LADAR imagery from a single laser pulse, rather than an image constructed from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance of these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive-detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants; and 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling the non-uniformity of each individual pixel. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel by pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of

  16. Evaluation of radiotherapy setup accuracy for head and neck cancer using a 3-D surface imaging system

    NASA Astrophysics Data System (ADS)

    Cho, H.-L.; Park, E.-T.; Kim, J.-Y.; Kwak, K.-S.; Kim, C.-J.; Ahn, K.-J.; Suh, T.-S.; Lee, Y.-K.; Kim, S.-W.; Kim, J.-K.; Lim, S.; Choi, Y.-M.; Park, S.-K.

    2013-11-01

    The purpose of this study was to measure the accuracy of a three-dimensional surface imaging system (3-D SIS) in comparison with a 3-laser system by analyzing the setup errors obtained from a RANDO phantom and from head and neck cancer patients. The 3-D SIS used for the evaluation of the setup errors was a C-RAD Sentinel. In the phantom study, the OBI setup errors of the 3-laser system vs. the 3-D SIS were measured both without and with the thermoplastic mask. After comparison with the CBCT, setup corrections of about 1 mm were performed in a few cases. The probability of the error without the thermoplastic mask exceeding 1 mm in the 3-laser system vs. the 3-D SIS was 75.00% vs. 35.00% on the X-axis, 80.00% vs. 40.00% on the Y-axis, and 80.00% vs. 65.00% on the Z-axis. Moreover, the probability of the error with the thermoplastic mask exceeding 1 mm was 70.00% vs. 15.00% on the X-axis, 75.00% vs. 25.00% on the Y-axis, and 70.00% vs. 35.00% on the Z-axis. These results showed that the 3-D SIS has a lower probability of setup error than the 3-laser system for the phantom. For the patients, the setup errors of the 3-laser system vs. the 3-D SIS were also measured. The probability of the error exceeding 1 mm was 81.82% vs. 36.36% on the X-axis, 81.82% vs. 45.45% on the Y-axis, and 86.36% vs. 72.73% on the Z-axis. Thus the 3-D SIS also exhibited a lower probability of setup error for the cancer patients. This study therefore confirms that the 3-D SIS is a promising method for setup verification.

  17. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, which are used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in the calculation of effective doses from these examinations. In the study the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image; 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images; and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.
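    The contrast between methods 1 and 2 can be illustrated with a toy dose model. The real calculations use Monte Carlo software (PCXMC); here a made-up linear conversion from tube voltage and DAP stands in for it, and the kV/DAP values are hypothetical:

```python
# Toy comparison of dose-calculation methods 1 and 2 from the abstract.
# effective_dose() is a hypothetical linear conversion, not PCXMC.

def effective_dose(kv, dap, k0=0.001, k_slope=0.00002):
    """Toy model: effective dose (mSv) for one projection from kV and DAP."""
    return (k0 + k_slope * kv) * dap

kv_list = [70, 75, 80, 85, 90]                # hypothetical tube voltages
dap_list = [120, 150, 160, 140, 130]          # hypothetical per-projection DAP

# Method 1: individual exposure values for each projection image.
dose1 = sum(effective_dose(kv, dap) for kv, dap in zip(kv_list, dap_list))

# Method 2: mean tube voltage, total DAP spread evenly over the projections.
mean_kv = sum(kv_list) / len(kv_list)
total_dap = sum(dap_list)
dose2 = sum(effective_dose(mean_kv, total_dap / len(kv_list)) for _ in kv_list)

print(dose1, dose2)  # nearly equal, echoing the <5% difference reported
```

The two methods differ only through the correlation between kV and DAP across projections, which is why averaging over all projections loses little accuracy.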

  18. Integrated 3D macro-trapping and light-sheet imaging system

    NASA Astrophysics Data System (ADS)

    Yang, Zhengyi; Piksarv, Peeter; Ferrier, David E. K.; Gunn-Moore, Frank J.; Dholakia, Kishan

    2015-08-01

    Biological research requires high-speed, low-damage imaging techniques for live specimens in areas such as developmental studies of embryos. Light-sheet microscopy provides fast imaging while keeping photo-damage and photo-bleaching to a minimum. Conventional sample-embedding methods in light-sheet imaging involve agents such as agarose, which can affect the behavior and developmental pattern of the specimens. Here we demonstrate the integration of a dual-beam trapping method into a light-sheet imaging system to confine and translate the specimen while light-sheet images are taken. Tobacco plant cells as well as Spirobranchus lamarcki larvae were trapped solely by optical force and sectional images were acquired. This new approach has the potential to significantly extend the applications of light-sheet imaging.

  19. A model and simulation to predict 3D imaging LADAR sensor systems performance in real-world type environments

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Russo, Leonard E.

    2006-08-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. Accurate methods to model and simulate performance from 3D LADAR systems have been lacking, relying upon either single pixel LADAR performance or extrapolating from passive detection FPA performance. The model and simulation here is developed expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) detector noise figure; 4) detector gain; 5) target attributes; 6) atmospheric transmission; 7) atmospheric backscatter; 8) atmospheric turbulence; 9) obscurants; 10) obscurant path length, and; 11) platform motion. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel. Here, noise sources and gain are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel for the entire array. Model outputs are 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array.

  20. The experiment study of image acquisition system based on 3D machine vision

    NASA Astrophysics Data System (ADS)

    Zhou, Haiying; Xiao, Zexin; Zhang, Xuefei; Wei, Zhe

    2011-11-01

    Binocular vision is one of the key technologies for three-dimensional scene reconstruction in 3D machine vision, since important three-dimensional information can be acquired with it. Two or more pictures of a scene are first captured by cameras; the three-dimensional information contained in these pictures is then recovered through geometric and other relationships. To improve the measurement accuracy of the image acquisition system, an image acquisition system for binocular-vision three-dimensional scene reconstruction is studied in this article. Based on the parallax principle and human binocular imaging, an image acquisition system using a double optical path and double CCDs is proposed. Experiments determine the best angle between the optical axes of the double optical path and the best operating distance; from these, the centre distance of the double CCDs is determined. Two images of the same scene from different viewpoints are captured by the double CCDs, providing a good foundation for the subsequent three-dimensional reconstruction in image processing. The experimental data show the soundness of this method.
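    The parallax principle underlying the dual-CCD setup reduces, for a rectified stereo pair, to depth = focal length × baseline / disparity. A minimal sketch with illustrative numbers (the baseline plays the role of the centre distance between the two CCDs):

```python
# Depth from disparity for a rectified binocular pair: Z = f * B / d.
# All parameter values below are illustrative, not from the paper.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth (same unit as baseline) from focal length, baseline, disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

z = depth_from_disparity(focal_px=1200.0, baseline_mm=60.0, disparity_px=24.0)
print(f"depth: {z:.0f} mm")  # 3000 mm
```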

  1. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration which could occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real-time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.
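    The update scheme described (refresh the rendering as each slice arrives, rather than after a full stack) can be sketched as a volume buffer that swaps in the latest slice. Buffer sizes are assumed and the rendering step itself is elided:

```python
# Sketch of slice-by-slice volume updating: each newly acquired 2D slice
# replaces its plane in a 3D buffer, which can then be re-rendered
# immediately instead of waiting for the complete stack.
import numpy as np

class LiveVolume:
    def __init__(self, n_slices=32, height=128, width=128):
        self.volume = np.zeros((n_slices, height, width), dtype=np.float32)

    def update_slice(self, index, slice_2d):
        """Insert the latest slice; the returned volume is ready to render."""
        self.volume[index] = slice_2d
        return self.volume

vol = LiveVolume()
vol.update_slice(5, np.ones((128, 128), dtype=np.float32))
print(vol.volume[5].mean(), vol.volume[4].mean())  # 1.0 0.0
```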

  2. Evolving technologies for growing, imaging and analyzing 3D root system architecture of crop plants.

    PubMed

    Piñeros, Miguel A; Larson, Brandon G; Shaff, Jon E; Schneider, David J; Falcão, Alexandre Xavier; Yuan, Lixing; Clark, Randy T; Craft, Eric J; Davis, Tyler W; Pradier, Pierre-Luc; Shaw, Nathanael M; Assaranurak, Ithipong; McCouch, Susan R; Sturrock, Craig; Bennett, Malcolm; Kochian, Leon V

    2016-03-01

    A plant's ability to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, can be strongly influenced by root system architecture (RSA), the three-dimensional distribution of the different root types in the soil. The ability to image, track and quantify these root system attributes in a dynamic fashion is a useful tool in assessing desirable genetic and physiological root traits. Recent advances in imaging technology and phenotyping software have resulted in substantive progress in describing and quantifying RSA. We have designed a hydroponic growth system which retains the three-dimensional RSA of the plant root system, while allowing for aeration, solution replenishment and the imposition of nutrient treatments, as well as high-quality imaging of the root system. The simplicity and flexibility of the system allows for modifications tailored to the RSA of different crop species and improved throughput. This paper details the recent improvements and innovations in our root growth and imaging system which allows for greater image sensitivity (detection of fine roots and other root details), higher efficiency, and a broad array of growing conditions for plants that more closely mimic those found under field conditions.

  3. Evolving technologies for growing, imaging and analyzing 3D root system architecture of crop plants.

    PubMed

    Piñeros, Miguel A; Larson, Brandon G; Shaff, Jon E; Schneider, David J; Falcão, Alexandre Xavier; Yuan, Lixing; Clark, Randy T; Craft, Eric J; Davis, Tyler W; Pradier, Pierre-Luc; Shaw, Nathanael M; Assaranurak, Ithipong; McCouch, Susan R; Sturrock, Craig; Bennett, Malcolm; Kochian, Leon V

    2016-03-01

    A plant's ability to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, can be strongly influenced by root system architecture (RSA), the three-dimensional distribution of the different root types in the soil. The ability to image, track and quantify these root system attributes in a dynamic fashion is a useful tool in assessing desirable genetic and physiological root traits. Recent advances in imaging technology and phenotyping software have resulted in substantive progress in describing and quantifying RSA. We have designed a hydroponic growth system which retains the three-dimensional RSA of the plant root system, while allowing for aeration, solution replenishment and the imposition of nutrient treatments, as well as high-quality imaging of the root system. The simplicity and flexibility of the system allows for modifications tailored to the RSA of different crop species and improved throughput. This paper details the recent improvements and innovations in our root growth and imaging system which allows for greater image sensitivity (detection of fine roots and other root details), higher efficiency, and a broad array of growing conditions for plants that more closely mimic those found under field conditions. PMID:26683583

  4. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard metrological parameters that identify a device's capability to capture a real scene. For this reason several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also covers laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper shows the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both triangulation and direct range detection.
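    The comparison step these protocols rely on can be sketched for the simplest certified artifact, a sphere: compute each measured point's signed radial deviation from the certified surface and analyse the spread. The point data below are synthetic, standing in for a real scan:

```python
# Sketch of comparing measured 3D points against a certified reference shape
# (a sphere of known centre and radius); the deviation statistics estimate
# the device's measurement uncertainty. Scan data here are synthetic.
import numpy as np

def sphere_deviations(points, center, radius):
    """Signed radial deviation of each point from the certified sphere."""
    d = np.linalg.norm(points - np.asarray(center), axis=1)
    return d - radius

rng = np.random.default_rng(1)
# Synthetic scan: points on a 25 mm sphere with 0.05 mm measurement noise.
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = dirs * (25.0 + rng.normal(0.0, 0.05, 1000))[:, None]

dev = sphere_deviations(pts, (0.0, 0.0, 0.0), 25.0)
print(dev.std())  # recovers roughly the 0.05 mm noise that was injected
```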

  5. Proposed NRC portable target case for short-range triangulation-based 3D imaging systems characterization

    NASA Astrophysics Data System (ADS)

    Carrier, Benjamin; MacKinnon, David; Cournoyer, Luc; Beraldin, J.-Angelo

    2011-03-01

    The National Research Council of Canada (NRC) is currently evaluating and designing artifacts and methods to completely characterize 3-D imaging systems. We have gathered a set of artifacts to form a low-cost portable case and provide a clearly defined set of procedures for generating characteristic values using these artifacts. In its current version, this case is specifically designed for the characterization of short-range (standoff distance of 1 centimeter to 3 meters) triangulation-based 3-D imaging systems. The case is known as the "NRC Portable Target Case for Short-Range Triangulation-based 3-D Imaging Systems" (NRC-PTC). The artifacts in the case have been carefully chosen for their geometric, thermal, and optical properties. A set of characterization procedures is provided with these artifacts, based on procedures either already in use or derived from knowledge acquired in various tests carried out by the NRC. Geometric dimensioning and tolerancing (GD&T), a terminology well known in industry, was used to define the set of tests. The following parameters of a system are characterized: dimensional properties, form properties, orientation properties, localization properties, profile properties, repeatability, intermediate precision, and reproducibility. A number of tests were performed in a special dimensional metrology laboratory to validate the capability of the NRC-PTC. The NRC-PTC will soon be subjected to reproducibility testing using an intercomparison evaluation to validate its use in different laboratories.

  6. Geodetic Reference System Transformations of 3d Archival Geospatial Data Using a Single SSC Terrasar-X Image

    NASA Astrophysics Data System (ADS)

    Vassilaki, D. I.; Stamos, A. A.

    2016-06-01

    Many older maps were created using reference coordinate systems which are no longer available, either because no datum information was recorded in the first place or because the reference system has been forgotten. In other cases the relationship between the map's coordinate system and current reference systems is not known with precision, meaning that the map's absolute error is much larger than its relative error. In this paper the georeferencing of medium-scale maps is computed using a single TerraSAR-X image. A single TerraSAR-X image has high geolocation accuracy but no 3D information. The map, however, provides the missing 3D information, and thus it is possible to compute the georeferencing of the map using the TerraSAR-X geolocation information, combining the information of both sources to produce 3D points in the reference system of the TerraSAR-X image. Two methods based on this concept are proposed. The methods are tested with real-world examples and the results are promising for further research.

  7. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. Next, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
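    The persistence baseline used to vet such forecasts simply repeats the last observed irradiance; the improvement figure is the fractional error reduction relative to it. A minimal sketch with synthetic irradiance series (not the LISF data):

```python
# Sketch of the persistence-model comparison: improvement is the fractional
# RMSE reduction of the forecast relative to a persistence forecast that
# repeats the previous observation. All series below are synthetic.
import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

obs         = [800, 780, 620, 500, 640, 700]  # observed irradiance, W/m^2
persistence = [810, 800, 780, 620, 500, 640]  # previous value as forecast
model       = [805, 770, 650, 520, 620, 690]  # hypothetical system forecast

improvement = 1.0 - rmse(model, obs) / rmse(persistence, obs)
print(f"improvement over persistence: {improvement:.0%}")
```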

  8. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. Next, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  9. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials and aerosols, imaging through walls as in hostage situations, and imaging in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of concealed objects from the body and from environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging from large distances, allowing standoff detection of suspicious objects and humans.
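    The FMCW ranging principle the abstract invokes maps a measured beat frequency to range via the chirp parameters: R = c · f_beat · T / (2B) for a linear chirp of bandwidth B and duration T. A minimal sketch with illustrative parameter values:

```python
# FMCW range equation for a linear chirp: R = c * f_beat * T / (2 * B).
# Chirp bandwidth, duration and beat frequency below are illustrative.
C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_bandwidth_hz, chirp_duration_s):
    return C * f_beat_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

r = fmcw_range(f_beat_hz=2.0e4, chirp_bandwidth_hz=3.0e9, chirp_duration_s=1.0e-3)
print(f"range: {r:.2f} m")  # 1.00 m
```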

  10. Wide-field-of-view image pickup system for multiview volumetric 3D displays using multiple RGB-D cameras

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Kakeya, Hideki

    2014-03-01

    A real-time, wide-field-of-view image pickup system for coarse integral volumetric imaging (CIVI) is realized. The system applies the CIVI display to live-action videos generated by real-time 3D reconstruction. By using multiple RGB-D cameras viewing from different directions, a complete surface of the objects can be shown in our CIVI displays over a wide field of view. A prototype system was constructed; it works as follows. First, image features and depth data are used for fast and accurate calibration. Second, 3D point cloud data are obtained by each RGB-D camera and converted into a common coordinate system. Third, multiview images are constructed by perspective transformation from different viewpoints. Finally, the image for each viewpoint is divided depending on the depth of each pixel for a volumetric view. The experiments show a better result than using only one RGB-D camera, and the whole system works in real time.
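    The second step, bringing each camera's point cloud into a common coordinate system, is a rigid transform (rotation plus translation) obtained from calibration. A minimal sketch; the transform below is a made-up example, not a calibration result from the paper:

```python
# Sketch of merging RGB-D point clouds: apply each camera's calibrated rigid
# transform (world = R @ p + t) so all points share one coordinate system.
import numpy as np

def to_world(points, R, t):
    """Apply a rigid transform to an (N, 3) array of points."""
    return points @ np.asarray(R).T + np.asarray(t)

# Example: 90-degree rotation about the z-axis plus a 1 m shift along x.
R = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
t = [1.0, 0.0, 0.0]

cam_points = np.array([[0.5, 0.0, 2.0], [0.0, 0.5, 2.0]])
print(to_world(cam_points, R, t))  # [[1.  0.5 2. ], [0.5 0.  2. ]]
```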

  11. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition, highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in such maps. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Owing to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions, and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation, and partial occlusion.
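
    Stage 2's size-based filtering can be sketched as follows; the physical-size window and the candidate point sets are assumptions for illustration, not values from the paper:

```python
import numpy as np

# Sketch of stage 2: reject image-space candidates whose physical extent,
# estimated from the associated 3D MMS points, is implausible for a road sign.
# The 0.4-1.2 m size window below is an assumed, illustrative tolerance.

MIN_SIZE_M, MAX_SIZE_M = 0.4, 1.2

def plausible_sign(points_xyz):
    """points_xyz: Nx3 array of 3D points falling inside the candidate region."""
    extent = points_xyz.max(axis=0) - points_xyz.min(axis=0)
    size = float(np.max(extent))        # largest physical dimension in metres
    return MIN_SIZE_M <= size <= MAX_SIZE_M

# A 0.7 m candidate survives; a 4 m billboard-sized region is filtered out.
sign_like = np.array([[0.0, 0.0, 5.0], [0.7, 0.0, 5.0], [0.7, 0.7, 5.0]])
billboard = np.array([[0.0, 0.0, 5.0], [4.0, 0.0, 5.0], [4.0, 3.0, 5.0]])
```

    Only candidates passing this physical-plausibility test proceed to the template matching of stage 3.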

  12. Scanning all-fiber-optic endomicroscopy system for 3D nonlinear optical imaging of biological tissues

    PubMed Central

    Wu, Yicong; Leng, Yuxin; Xi, Jiefeng; Li, Xingde

    2009-01-01

    An extremely compact all-fiber-optic scanning endomicroscopy system was developed for two-photon fluorescence (TPF) and second harmonic generation (SHG) imaging of biological samples. A conventional double-clad fiber (DCF) was employed in the endomicroscope for single-mode femtosecond pulse delivery, collection of multimode nonlinear optical signals, and fast two-dimensional scanning. A single photonic bandgap fiber (PBF) with negative group velocity dispersion at the two-photon excitation wavelength (i.e., ~810 nm) was used for pulse prechirping in place of a bulky grating/lens-based pulse stretcher. The combined use of the DCF and PBF made the endomicroscope essentially a plug-and-play unit. The excellent imaging ability of this extremely compact all-fiber-optic nonlinear optical endomicroscopy system was demonstrated by SHG imaging of rat tail tendon and depth-resolved TPF imaging of epithelial tissues stained with acridine orange. These preliminary results suggest the promising potential of the system for real-time assessment of both epithelial and stromal structures in luminal organs. PMID:19434122

  13. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and three-dimensional motion vectors, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  14. Low-cost 3D rangefinder system

    NASA Astrophysics Data System (ADS)

    Chen, Bor-Tow; Lou, Wen-Shiou; Chen, Chia-Chen; Lin, Hsien-Chang

    1998-06-01

    Nowadays, 3D data are widely used in computing, and 3D browsers manipulate 3D models in virtual worlds. Yet 3D digitizers have remained high-cost products rather than familiar equipment. To meet the requirements of the 3D world, this paper proposes the concept of a low-cost 3D digitizer system for capturing 3D range data from objects. A specialized optical design for 3D extraction reduces the size of the device, and the system's processing software runs on a PC, making it portable. Both features yield a low-cost system in a PC environment, in contrast to large systems bundled with expensive workstation platforms. For 3D extraction, a laser beam and a CCD camera are used to construct a 3D sensor. Instead of the two CCD cameras previously used to capture laser lines twice, a 2-in-1 design merges the two images in one CCD while retaining the information of two fields of view to mitigate occlusion problems. In addition, the optical paths of the two camera views are folded by a mirror so that the volume of the system can be reduced with only one rotary axis, making a portable system more practical. Combined with processing software executable under PC Windows, the proposed system saves not only hardware cost but also software processing time. The system achieves 0.05 mm accuracy, showing that a low-cost system can also be high-performance.

  15. Full-color holographic 3D imaging system using color optical scanning holography

    NASA Astrophysics Data System (ADS)

    Kim, Hayan; Kim, You Seok; Kim, Taegeun

    2016-06-01

    We propose a full-color holographic three-dimensional imaging system composed of a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are converted to off-axis RGB holograms and transmitted to the reconstruction stage. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.

  16. Perception of detail in 3D images

    NASA Astrophysics Data System (ADS)

    Heynderickx, Ingrid; Kaptein, Ronald

    2009-01-01

    Many current 3D displays suffer from a spatial resolution that is lower than that of their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye views leads to blurring or ghosting, and therefore to a decrease in perceived sharpness. However, people watching stereoscopic videos have reported that the 3D scene contained more detail than the 2D scene with identical spatial resolution. This is an interesting notion that has never been tested in a systematic and quantitative way. To investigate this effect, we had people compare the amount of detail ("detailedness") in pairs of 2D and 3D images. A blur filter was applied to one of the two images, and the blur level was varied using an adaptive staircase procedure. In this way, the blur threshold at which the 2D and 3D images contained perceptually the same amount of detail could be found. Our results show that the 3D image needed to be blurred more than the 2D image. This confirms the earlier qualitative findings that 3D images contain perceptually more detail than 2D images with the same spatial resolution.
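
    An adaptive staircase of the kind described can be sketched roughly as follows; the deterministic simulated observer, the step-halving rule, and all parameter values are hypothetical stand-ins for the real psychophysical procedure:

```python
# Sketch of an adaptive staircase for the blur-matching task: the blur applied
# to one image is raised after a "more detailed" response and lowered
# otherwise, homing in on the level of equal perceived detail.
# The simulated observer below (true threshold 2.0) is purely illustrative.

TRUE_THRESHOLD = 2.0

def observer_says_more_detailed(blur_level):
    """Hypothetical deterministic observer for the sketch."""
    return blur_level < TRUE_THRESHOLD

def run_staircase(start=0.0, step=0.5, n_trials=40):
    level, reversals, last_dir = start, [], 0
    for _ in range(n_trials):
        direction = 1 if observer_says_more_detailed(level) else -1
        if last_dir and direction != last_dir:
            reversals.append(level)     # record the reversal point
            step /= 2.0                 # shrink the step at each reversal
        level += direction * step
        last_dir = direction
    return level, reversals

final_level, reversals = run_staircase()
```

    In practice the threshold estimate is usually the mean of the last several reversal points rather than the final level, but the convergence behaviour is the same.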

  17. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
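
    The voxel opacity filter described above can be sketched in a few lines; the reflectivity volume and the threshold are illustrative stand-ins, not seismic data:

```python
import numpy as np

# Sketch of a volume-rendering opacity filter: voxels whose value falls below
# a threshold are made fully transparent (alpha = 0) so the viewer can peer
# into the data volume. Values and threshold are illustrative.

def apply_opacity_filter(volume, threshold, max_alpha=1.0):
    """Return per-voxel alpha: 0 below the threshold, scaled opacity above it."""
    return np.where(volume >= threshold,
                    max_alpha * (volume / volume.max()), 0.0)

rng = np.random.default_rng(0)
volume = rng.random((8, 8, 8))          # stand-in reflectivity volume
alpha = apply_opacity_filter(volume, threshold=0.7)
```

    A renderer then composites only the voxels with nonzero alpha, which is what lets melt-lens reflections stand out against the suppressed background.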

  18. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    PubMed Central

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-01-01

    Faults in the vascular (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, the lack of knowledge of the complete fine structure of the VN and neuronal system remains a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of ex-vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with no contrast agent, no sectioning, and no destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles, and neuron somata. Our approach is well suited for pre-clinical investigation of neurodegenerative pathologies and spinal cord injuries, in particular to resolve the entangled relationship between the VN and the neuronal system. PMID:25686728

  19. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    NASA Astrophysics Data System (ADS)

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-02-01

    Faults in the vascular (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, the lack of knowledge of the complete fine structure of the VN and neuronal system remains a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of ex-vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with no contrast agent, no sectioning, and no destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles, and neuron somata. Our approach is well suited for pre-clinical investigation of neurodegenerative pathologies and spinal cord injuries, in particular to resolve the entangled relationship between the VN and the neuronal system.

  20. Evaluation of 3D imaging.

    PubMed

    Vannier, M W

    2000-10-01

    Interactive computer-based simulation is gaining acceptance for craniofacial surgical planning. Subjective visualization without objective measurement capability, however, severely limits the value of simulation, since spatial accuracy must be maintained. This study investigated the error sources involved in one method of surgical simulation evaluation. Linear and angular measurement errors were found to be within +/- 1 mm and 1 degree. Surface matching of scanned objects was slightly less accurate, with errors up to 3 voxels and 4 degrees, and Boolean subtraction methods were 93 to 99% accurate. Once validated, these testing methods were applied to objectively compare craniofacial surgical simulations to post-operative outcomes, and verified that the form of simulation used in this study yields accurate depictions of surgical outcome. However, to fully evaluate surgical simulation, future work is still required to test the new methods in sufficient numbers of patients to achieve statistically significant results. Once completely validated, simulation can be used not only in pre-operative surgical planning, but also as a post-operative descriptor of surgical and traumatic physical changes. Validated image comparison methods can also reveal discrepancies between surgical outcome and surgical plan, thus allowing evaluation of surgical technique. PMID:11098409

  1. Occluded human recognition for a leader following system using 3D range and image data in forest environment

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Ilyas, Muhammad; Baeg, Seung-Ho; Park, Sangdeok

    2014-06-01

    This paper describes an occluded-target recognition and tracking method for a leader-following system that fuses 3D range and image data acquired from a 3D light detection and ranging (LIDAR) sensor and a color camera installed on an autonomous vehicle in a forest environment. During 3D data processing, distance-based clustering has an intrinsic problem with close encounters. In the tracking phase, we divide the object tracking process into three phases based on the occlusion scenario: a before-occlusion (BO) phase, a partially or fully occluded phase, and an after-occlusion (AO) phase. To improve data association performance, we use the camera's rich information to find correspondences among objects during these three phases. We solve the correspondence problem using the color features of human objects with the sum of squared distances (SSD) and the normalized cross-correlation (NCC), integrating the features within windows derived from Harris corners. Experimental results for leader following on an autonomous vehicle with LIDAR and camera demonstrate the improvement in data association in a multiple-object tracking system.
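
    The two similarity scores named above, SSD and NCC, can be sketched as follows; the patches are illustrative (the paper applies them to color features inside Harris-corner windows):

```python
import numpy as np

# Sketch of the two patch-similarity scores used for data association:
# sum of squared distances (SSD, lower is better) and normalized
# cross-correlation (NCC, closer to 1 is better). Patches are illustrative.

def ssd(a, b):
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def ncc(a, b):
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

template = np.array([[10.0, 20.0], [30.0, 40.0]])
same = template.copy()
brighter = template + 50.0              # same pattern, global brightness offset
different = np.array([[40.0, 10.0], [20.0, 30.0]])
```

    Note the complementary behaviour: SSD penalizes the global brightness offset of `brighter`, while NCC is invariant to it, which is why combining both scores makes the association more robust across an occlusion.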

  2. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

    Glaucoma is the second leading cause of blindness in the world, and this number tends to increase with the rising life expectancy of the population. Glaucoma refers to eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the patient's visual quality. In the majority of cases the damage to the optic nerve is irreversible and occurs due to increased intraocular pressure. One of the main challenges is diagnosing the disease early, because no symptoms are present in the initial stage; when it is detected, it is already at an advanced stage. Currently, evaluation of the optic disc is performed with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a specific fundus camera, without fluorescein angiography or a red-free system, to capture 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and nonmydriatic modes.

  3. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting substantial research effort. Above all, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles were first recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems: the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  4. Mobile Biplane X-Ray Imaging System for Measuring 3D Dynamic Joint Motion During Overground Gait.

    PubMed

    Guan, Shanyuanye; Gray, Hans A; Keynejad, Farzad; Pandy, Marcus G

    2016-01-01

    Most X-ray fluoroscopy systems are stationary and impose restrictions on the measurement of dynamic joint motion; for example, knee-joint kinematics during gait is usually measured with the subject ambulating on a treadmill. We developed a computer-controlled, mobile, biplane, X-ray fluoroscopy system to track human body movement for high-speed imaging of 3D joint motion during overground gait. A robotic gantry mechanism translates the two X-ray units alongside the subject, tracking and imaging the joint of interest as the subject moves. The main aim of the present study was to determine the accuracy with which the mobile imaging system measures 3D knee-joint kinematics during walking. In vitro experiments were performed to measure the relative positions of the tibia and femur in an intact human cadaver knee and of the tibial and femoral components of a total knee arthroplasty (TKA) implant during simulated overground gait. Accuracy was determined by calculating mean, standard deviation and root-mean-squared errors from differences between kinematic measurements obtained using volumetric models of the bones and TKA components and reference measurements obtained from metal beads embedded in the bones. Measurement accuracy was enhanced by the ability to track and image the joint concurrently. Maximum root-mean-squared errors were 0.33 mm and 0.65° for translations and rotations of the TKA knee and 0.78 mm and 0.77° for translations and rotations of the intact knee, which are comparable to results reported for treadmill walking using stationary biplane systems. System capability for in vivo joint motion measurement was also demonstrated for overground gait.

  5. High-quality 3-D coronary artery imaging on an interventional C-arm x-ray system

    SciTech Connect

    Hansis, Eberhard; Carroll, John D.; Schaefer, Dirk; Doessel, Olaf; Grass, Michael

    2010-04-15

    Purpose: Three-dimensional (3-D) reconstruction of the coronary arteries during a cardiac catheter-based intervention can be performed from a C-arm based rotational x-ray angiography sequence. It can support the diagnosis of coronary artery disease, treatment planning, and intervention guidance. 3-D reconstruction also enables quantitative vessel analysis, including vessel dynamics from a time-series of reconstructions. Methods: The strong angular undersampling and motion effects present in gated cardiac reconstruction necessitate the development of special reconstruction methods. This contribution presents a fully automatic method for creating high-quality coronary artery reconstructions. It employs a sparseness-prior based iterative reconstruction technique in combination with projection-based motion compensation. Results: The method is tested on a dynamic software phantom, assessing reconstruction accuracy with respect to vessel radii and attenuation coefficients. Reconstructions from clinical cases are presented, displaying high contrast, sharpness, and level of detail. Conclusions: The presented method enables high-quality 3-D coronary artery imaging on an interventional C-arm system.
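
    The sparseness-prior iterative reconstruction can be illustrated, under strong simplification, by iterative soft-thresholding (ISTA) on a toy linear system; the operator A, data b, and all parameters below are stand-ins, not the paper's gated cone-beam geometry:

```python
import numpy as np

# Sketch of a sparseness-prior iterative reconstruction: minimize
# (1/2)*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding (ISTA).
# A sparse "vessel" signal is recovered from undersampled measurements.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.05, n_iter=500):
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2   # safe gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))                # undersampled toy operator
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]           # sparse "vessel" signal
b = A @ x_true
x_hat = ista(A, b)
```

    The sparsity prior is what lets the reconstruction succeed despite the strong angular undersampling: with 30 measurements for 60 unknowns, an unregularized least-squares solution would be badly underdetermined.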

  6. Fringe projection 3D microscopy with the general imaging model.

    PubMed

    Yin, Yongkai; Wang, Meng; Gao, Bruce Z; Liu, Xiaoli; Peng, Xiang

    2015-03-01

    Three-dimensional (3D) imaging and metrology of microstructures is a critical task for the design, fabrication, and inspection of microelements. A newly developed fringe projection 3D microscope is presented in this paper. The system is configured with a camera-projector layout and long-working-distance lenses. The Scheimpflug principle is employed to make full use of the limited depth of field. For such a specific system, the general imaging model is introduced to achieve a full 3D reconstruction. A dedicated calibration procedure is developed to realize quantitative 3D imaging. Experiments with a prototype demonstrate the feasibility of the proposed configuration, model, and calibration approach.

  7. 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

    PubMed

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-11-10

    This work introduces a fast, low-cost, robust method based on fringe patterns and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system, and we used a cylindrical calibration bar to calculate the transformation matrix between the two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcams and then used to construct two phase maps, which were further converted into two 3D surfaces composed of scattered points. The two 3D point clouds were then merged into one using the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validates our surface extraction approach.
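
    The nine-step phase recovery can be sketched as follows; the fringe data are synthetic, and only the standard N-step phase-shifting relation is assumed:

```python
import numpy as np

# Sketch of the N-step phase-shifting recovery (N = 9, shift 2*pi/9): from
# intensities I_n = A + B*cos(phi + 2*pi*n/N) the wrapped phase follows as
# phi = atan2(-sum(I_n*sin(d_n)), sum(I_n*cos(d_n))). Data are synthetic.

N_STEPS = 9

def wrapped_phase(images):
    """images: sequence of N phase-shifted fringe images of equal shape."""
    deltas = 2.0 * np.pi * np.arange(N_STEPS) / N_STEPS
    num = -sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(num, den)

# Synthesize fringes for a known phase profile and recover it.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 64)
images = [5.0 + 2.0 * np.cos(phi_true + 2.0 * np.pi * n / N_STEPS)
          for n in range(N_STEPS)]
phi_est = wrapped_phase(images)
```

    The recovered phase is wrapped to (-π, π]; converting it to absolute height additionally requires phase unwrapping and the calibrated phase-to-depth mapping.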

  8. 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

    PubMed

    Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing

    2015-11-10

    This work introduces a fast, low-cost, robust method based on fringe patterns and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system, and we used a cylindrical calibration bar to calculate the transformation matrix between the two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcams and then used to construct two phase maps, which were further converted into two 3D surfaces composed of scattered points. The two 3D point clouds were then merged into one using the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validates our surface extraction approach. PMID:26560789

  9. Dynamic contrast-enhanced 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.
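
    The principal component analysis step can be sketched on a synthetic frame sequence; the wash-in model below is an illustrative stand-in for the measured contrast dynamics:

```python
import numpy as np

# Sketch of the spatiotemporal analysis: each 3D PA frame is flattened to a
# row, the sequence is mean-centred, and an SVD yields principal components
# that separate flow dynamics from the static background. Data are synthetic.

rng = np.random.default_rng(2)
n_frames, n_voxels = 20, 100
static = rng.random(n_voxels)                     # unchanging background
wash_in = np.linspace(0.0, 1.0, n_frames)         # contrast wash-in curve
bolus = np.zeros(n_voxels)
bolus[40:60] = 1.0                                # voxels seeing the bolus

frames = np.outer(np.ones(n_frames), static) + np.outer(wash_in, bolus)
centred = frames - frames.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

# With a single dynamic process, the first component captures nearly all
# temporal variance; Vt[0] localizes the voxels carrying the flow signal.
explained = S[0] ** 2 / np.sum(S ** 2)
```

    Mean-centring removes the static anatomy, so the leading components describe only what changes over time, which is the spatiotemporal flow information the study classifies.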

  10. A volume of intersection approach for on-the-fly system matrix calculation in 3D PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Lougovski, A.; Hofheinz, F.; Maus, J.; Schramm, G.; Will, E.; van den Hoff, J.

    2014-02-01

    The aim of this study is the evaluation of on-the-fly volume-of-intersection computation for modelling the system geometry in 3D PET image reconstruction. For this purpose we propose a simple geometrical model in which the cubic image voxels on the given Cartesian grid are approximated with spheres and the rectangular tubes of response (ToRs) are approximated with cylinders. The model was integrated into a fully 3D list-mode PET reconstruction for performance evaluation. In our model the volume of intersection between a voxel and the ToR is a function only of the impact parameter (the distance from the voxel centre to the ToR axis) and is independent of the relative orientation of voxel and ToR. This substantially reduces the computational complexity of the system matrix calculation. Based on phantom measurements it was determined that adjusting the diameters of the spherical voxel and the ToR in such a way that the actual voxel and ToR volumes are conserved leads to the best compromise between high spatial resolution, low noise, and suppression of Gibbs artefacts in the reconstructed images. Phantom as well as clinical datasets from two different PET systems (Siemens ECAT HR+ and Philips Ingenuity-TF PET/MR) were processed using the developed and the respective vendor-provided (line-of-intersection based) reconstruction algorithms. A comparison of the reconstructed images demonstrated very good performance of the new approach: the vendor-provided reconstruction algorithms showed 34-41% lower resolution than the developed one while exhibiting comparable noise levels. In contrast to explicit point spread function modelling, our model has a simple, straightforward implementation and should be easy to integrate into existing reconstruction software, making it competitive with other existing resolution recovery techniques.
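
    Because the sphere-cylinder intersection volume depends only on the impact parameter, it can be evaluated (or tabulated) as a one-dimensional function; a Monte Carlo sketch with assumed radii, not the study's actual voxel and ToR dimensions:

```python
import numpy as np

# Monte Carlo sketch of the model above: the voxel is a sphere, the tube of
# response an (infinite) cylinder, and their intersection volume depends only
# on the impact parameter (voxel centre to ToR axis). Radii are illustrative.

R_VOXEL, R_TOR = 1.0, 0.8
rng = np.random.default_rng(3)

def intersection_volume(impact, n_samples=200_000):
    """Estimate the overlap volume of a sphere (radius R_VOXEL) whose centre
    lies `impact` away from the z-axis cylinder of radius R_TOR."""
    pts = rng.uniform(-R_VOXEL, R_VOXEL, size=(n_samples, 3))
    in_sphere = np.sum(pts ** 2, axis=1) <= R_VOXEL ** 2
    # distance of each sphere point (offset by the impact parameter) to the axis
    d_axis = np.hypot(pts[:, 0] + impact, pts[:, 1])
    hit = in_sphere & (d_axis <= R_TOR)
    return (2.0 * R_VOXEL) ** 3 * hit.mean()

v0 = intersection_volume(0.0)   # head-on: largest overlap
v1 = intersection_volume(1.0)   # partial overlap
v2 = intersection_volume(2.0)   # beyond R_VOXEL + R_TOR: no overlap
```

    In a reconstruction this function would be precomputed on a fine impact-parameter grid, so each system matrix element reduces to one distance computation and one table lookup.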

  11. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination for high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, from which the scene depth can be estimated from different directions. This multidirectional depth estimation achieves high-dynamic-range 3D imaging effectively. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrate the validity of the proposed method for high-quality 3D imaging of both highly and lowly reflective surfaces. PMID:27607639
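
    The ray-based calibration idea — fitting independent phase-depth mapping coefficients for each ray — can be sketched with a per-ray linear model; the linear form, the reference plane depths, and the coefficient values are assumptions for illustration, not the paper's derived mapping:

```python
import numpy as np

# Sketch of ray-wise calibration: assume each ray has its own linear
# phase-depth mapping z = a*phi + b, and fit (a, b) per ray by least squares
# from reference planes at known depths. All values are synthetic.

n_rays = 5
ref_depths = np.array([100.0, 150.0, 200.0])      # mm, calibration planes
rng = np.random.default_rng(4)
a_true = rng.uniform(5.0, 10.0, n_rays)           # per-ray coefficients
b_true = rng.uniform(-20.0, 20.0, n_rays)

# Phases observed on each ray at each reference plane: phi = (z - b) / a
phis = (ref_depths[:, None] - b_true) / a_true    # shape (3, n_rays)

coeffs = np.empty((n_rays, 2))
for r in range(n_rays):
    # least-squares fit of z = a*phi + b for this ray
    A = np.column_stack([phis[:, r], np.ones_like(ref_depths)])
    coeffs[r] = np.linalg.lstsq(A, ref_depths, rcond=None)[0]
```

    Fitting every ray independently is what makes the calibration "flexible": no global camera model is required, at the cost of storing one coefficient set per ray.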

  12. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    The C-Arm image-assisted surgical navigation system has been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both the CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration (Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information), combined with three optimization methods (Powell's method, the Downhill simplex algorithm, and a genetic algorithm), are evaluated for convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains the maximum correlation and similarity between the C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine saw bones are used in the experiment to evaluate 2D-3D image registration accuracy. The average displacement error is 0.22 mm, the success rate is approximately 90%, and the average registration time is 16 seconds. PMID:27018859
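
    The winning combination (a Normalized Cross-Correlation score maximized over candidate transformations) can be sketched as below. This is my own toy version, not the paper's implementation: NCC is computed exactly, but an exhaustive integer-pixel shift search stands in for the Downhill simplex optimizer, and the DRR generation step is omitted entirely.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_shift(fixed, moving, max_shift=2):
    """Exhaustive integer-shift search (stand-in for the Downhill simplex)."""
    best, best_score = (0, 0), -2.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, candidate)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score
```

    In the real system the search space is a 6-parameter rigid transform (rotations and translations) and the "moving" image is a DRR re-rendered from CT at each candidate pose; the scoring principle is the same.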

  15. Optical display of magnified, real and orthoscopic 3-D object images by moving-direct-pixel-mapping in the scalable integral-imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Piao, Yongri; Kim, Eun-Soo

    2011-10-01

    In this paper, we propose a novel approach for the reconstruction of magnified, real and orthoscopic three-dimensional (3-D) object images by using the moving-direct-pixel-mapping (MDPM) method in the MALT (moving-array-lenslet-technique)-based scalable integral-imaging system. In the proposed system, multiple sets of elemental image arrays (EIAs) are captured with the MALT, and these picked-up EIAs are computationally transformed into depth-converted ones by using the proposed MDPM method. These depth-converted EIAs are then combined and interlaced together to form an enlarged EIA, from which magnified, real and orthoscopic 3-D object images can be optically displayed without any degradation of resolution. Good experimental results confirmed the feasibility of the proposed method.

  16. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data are displayed.
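
    The "refocusing" sectioning step mentioned above is commonly implemented as shift-and-add over sub-aperture images: each view is shifted in proportion to its lenslet position and the results are averaged, so only the chosen depth plane adds coherently. A minimal sketch (my own illustration with integer-pixel shifts and hypothetical parameter names, not this paper's real-domain algorithm):

```python
import numpy as np

def refocus(subaps, positions, alpha):
    """Shift-and-add refocusing: shift each sub-aperture image in proportion
    to its (u, v) lenslet position, then average. alpha selects the plane."""
    acc = np.zeros(subaps[0].shape, dtype=float)
    for img, (u, v) in zip(subaps, positions):
        acc += np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                       int(round(alpha * v)), axis=1)
    return acc / len(subaps)
```

    With `alpha = 0` the routine simply averages all views (focus at the encoded plane); other values of `alpha` sweep the focal plane through the reconstructed volume.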

  17. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
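
    The error-containment idea — spatial partitions that extend through all wavelength bands and are coded independently — can be sketched as follows. This is emphatically not the ICER-3D coder: `zlib` stands in for the wavelet-transform, context-modeling, and entropy-coding subalgorithms, and the partition grid is illustrative. What the sketch does show is the containment property: each partition's stream can be decoded without the others.

```python
import zlib
import numpy as np

def compress_partitions(cube, ny, nx):
    """Split the spatial plane into ny-by-nx rectangular partitions that
    extend through all spectral bands, and compress each independently,
    so loss or corruption of one stream cannot affect the others."""
    bands, H, W = cube.shape
    parts = []
    for i in range(ny):
        for j in range(nx):
            block = cube[:, i * H // ny:(i + 1) * H // ny,
                            j * W // nx:(j + 1) * W // nx]
            parts.append(zlib.compress(np.ascontiguousarray(block).tobytes()))
    return parts
```

    In ICER-3D each partition's stream is additionally progressive, so a truncated partition still reconstructs at reduced fidelity; that refinement is beyond this sketch.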

  18. Commissioning of a 3D image-based treatment planning system for high-dose-rate brachytherapy of cervical cancer.

    PubMed

    Kim, Yongbok; Modrick, Joseph M; Pennington, Edward C; Kim, Yusung

    2016-01-01

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) of gynecological (GYN) cancer. The physical dimensions of the GYN applicators agreed to within 0.4 mm of their values in the virtual applicator library. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying the active tandem or cylinder probe length, the ovoid or cylinder size, the source calibration and treatment date, and the difference between the average Point A dose and the prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and processes in 3D image-based planning were presented. For the difference between line and volume optimizations, the average absolute differences were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against the conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure that no errors or inconsistencies occurred from imaging through planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for 3D image-based TPS for HDR

  19. 3D seismic imaging, example of 3D area in the middle of Banat

    NASA Astrophysics Data System (ADS)

    Antic, S.

    2009-04-01

    3D seismic imaging was carried out on a 3D seismic volume situated in the middle of the Banat region in Serbia. The 3D area is about 300 square kilometres. The aim of the 3D investigation was to define geological structures and tectonics, especially in the Mesozoic complex. The investigation objects are located at depths from 2000 to 3000 m. There are a number of wells in this area, but they are not deep enough to help in the interpretation. It was necessary to get a better seismic image of the deeper area. The acquisition parameters were satisfactory (good quality of input parameters, record length of 5 s, fold of up to 4000%), and the preprocessed data were satisfactory. GeoDepth is an integrated system for 3D velocity model building and 3D seismic imaging. Input data for 3D seismic imaging consist of preprocessed data sorted into CMP gathers and RMS stacking velocity functions. Other types of input data are geological information derived from well data, time-migrated images, and time-migrated maps. The workflow for this job was: loading and quality control of the input data (CMP gathers and velocities), creating an initial RMS velocity volume, PSTM, updating the RMS velocity volume, PSTM, building the initial interval velocity model, PSDM, updating the interval velocity model, PSDM. In the first stage the attempt is to derive an initial velocity model that is as simple as possible. The higher-frequency velocity changes are obtained in the updating stage. The next step, after running PSTM, is the time-to-depth conversion. After the model is built, we generate a 3D interval velocity volume and run 3D pre-stack depth migration. The main method for updating velocities is 3D tomography. The criteria used in velocity model determination are based on the flatness of pre-stack migrated gathers or the quality of the stacked image. The standard processing ended with poststack 3D time migration. Prestack depth migration is one of the most powerful tools available to the interpreter to develop an accurate velocity model and get

  20. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  1. Computational integral-imaging reconstruction-based 3-D volumetric target object recognition by using a 3-D reference object.

    PubMed

    Kim, Seung-Cheol; Park, Seok-Chan; Kim, Eun-Soo

    2009-12-01

    In this paper, we propose a novel computational integral-imaging reconstruction (CIIR)-based three-dimensional (3-D) image correlator system for the recognition of 3-D volumetric objects by employing a 3-D reference object. That is, a number of plane object images (POIs) computationally reconstructed from the 3-D reference object are used for the 3-D volumetric target recognition. In other words, simultaneous 3-D image correlations between two sets of target and reference POIs, which are depth-dependently reconstructed by using the CIIR method, are performed for effective recognition of 3-D volumetric objects in the proposed system. Successful experiments with this CIIR-based 3-D image correlator confirmed the feasibility of the proposed method.

  2. Evaluation of the Accuracy of a 3D Surface Imaging System for Patient Setup in Head and Neck Cancer Radiotherapy

    SciTech Connect

    Gopan, Olga; Wu Qiuwen

    2012-10-01

    Purpose: To evaluate the accuracy of three-dimensional (3D) surface imaging system (AlignRT) registration algorithms for head-and-neck cancer patient setup during radiotherapy. Methods and Materials: Eleven patients, each undergoing six repeated weekly helical computed tomography (CT) scans during treatment course (total 77 CTs including planning CT), were included in the study. Patient surface images used in AlignRT registration were not captured by the 3D cameras; instead, they were derived from skin contours from these CTs, thereby eliminating issues with immobilization masks. The results from surface registrations in AlignRT based on CT skin contours were compared to those based on bony anatomy registrations in Pinnacle³, which was considered the gold standard. Both rigid and nonrigid types of setup errors were analyzed, and the effect of tumor shrinkage was investigated. Results: The maximum registration errors in AlignRT were 0.2° for rotations and 0.7 mm for translations in all directions. The rigid alignment accuracy in the head region when applied to actual patient data was 1.1°, 0.8°, and 2.2° in rotation and 4.5, 2.7, and 2.4 mm in translation along the vertical, longitudinal, and lateral axes at 90% confidence level. The accuracy was affected by the patient's weight loss during treatment course, which was patient specific. Selectively choosing surface regions improved registration accuracy. The discrepancy for nonrigid registration was much larger at 1.9°, 2.4°, and 4.5° and 10.1, 11.9, and 6.9 mm at 90% confidence level. Conclusions: The 3D surface imaging system is capable of detecting rigid setup errors with good accuracy for head-and-neck cancer. Further investigations are needed to improve the accuracy in detecting nonrigid setup errors.

  3. Advances and considerations in technologies for growing, imaging, and analyzing 3-D root system architecture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The ability of a plant to mine the soil for nutrients and water is determined by how, where, and when roots are arranged in the soil matrix. The capacity of a plant to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, is affected by root system architectu...

  4. 3D noninvasive, high-resolution imaging using a photoacoustic tomography (PAT) system and rapid wavelength-cycling lasers

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin; Gross, Daniel; Klosner, Marc; Chan, Gary; Wu, Chunbai; Heller, Donald F.

    2015-05-01

    Globally, cancer is a major health issue as advances in modern medicine continue to extend the human life span. Breast cancer ranks second as a cause of cancer death in women in the United States. Photoacoustic (PA) imaging (PAI) provides high molecular contrast at greater depths in tissue without the use of ionizing radiation. In this work, we describe the development of a PA tomography (PAT) system and a rapid wavelength-cycling Alexandrite laser designed for clinical PAI applications. The laser produces 450 mJ/pulse at 25 Hz to illuminate the entire breast, which eliminates the need to scan the laser source. Wavelength cycling provides a pulse sequence in which the output wavelength repeatedly alternates between 755 nm and 797 nm rapidly within milliseconds. We present imaging results of breast phantoms with inclusions of different sizes at varying depths, obtained with this laser source, a 5-MHz 128-element transducer and a 128-channel Verasonics system. Results include PA images and 3D reconstruction of the breast phantom at 755 and 797 nm, delineating the inclusions that mimic tumors in the breast.

  5. High-resolution medical imaging system for 3D imaging of radioactive sources with 1-mm FWHM spatial resolution

    NASA Astrophysics Data System (ADS)

    Smither, Robert K.

    2003-06-01

    This paper describes a modification of a new imaging system developed at Argonne National Laboratory that has the potential of achieving a spatial resolution of 1 mm FWHM. The imaging system uses a crystal diffraction lens to focus gamma rays from the radioactive source. The medical imaging application of this system would be to detect small amounts of radioactivity in the human body that would be associated with cancer. The best spatial resolution obtained with the present lens at the time of the presentation made at the Medical Imaging Symposium 2001 was 6.7 mm FWHM for a 1-mm-diameter source. Since then it has been possible to improve the spatial resolution of the lens system to 3 mm FWHM. Experiments with the original lens system have led to a new design for a lens system that could have a spatial resolution of 1 mm FWHM. This is accomplished, first, by reducing the radial dimension of the crystals and, second, by replacing the small individual crystals with bent strips of single-crystal material. Experiments are under way to test this approach.

  6. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to safety problems in the atomic and railway industries, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method were suggested, as well as peculiarities of the formation of shadows and images of typical elements of extended objects. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing are presented and discussed. The created devices are in pilot operation at atomic and railway companies.

  7. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  8. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  9. Automating Shallow 3D Seismic Imaging

    SciTech Connect

    Steeples, Don; Tsoflias, George

    2009-01-15

    Our efforts since 1997 have been directed toward developing ultra-shallow seismic imaging as a cost-effective method applicable to DOE facilities. This report covers the final year of grant-funded research to refine 3D shallow seismic imaging, which built on a previous 7-year grant (FG07-97ER14826) that refined and demonstrated the use of an automated method of conducting shallow seismic surveys; this represents a significant departure from conventional seismic-survey field procedures. The primary objective of this final project was to develop an automated three-dimensional (3D) shallow-seismic reflection imaging capability. This is a natural progression from our previous published work and is conceptually parallel to the innovative imaging methods used in the petroleum industry.

  10. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risk. Consequently, it is very important to obtain accurate bone mineral density measurements. The microcomputed tomography system provides 3D information about the architectural properties of bone. Quantitative analysis accuracy is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a postreconstruction technique performed with off-line water and bone linearization curves calculated experimentally, aiming to take into account the nonhomogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The presented correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a remarkable improvement in the calculated mouse femur mass was observed. Results were also compared to those obtained with the simple water linearization technique, which does not take the nonhomogeneity of the object into account.
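
    The linearization-curve idea can be pictured as fitting a polynomial that maps measured (beam-hardened) line integrals back onto their ideal monochromatic values. The sketch below uses synthetic data and my own function names; the actual off-line water and bone curves in the paper are measured experimentally, not simulated.

```python
import numpy as np

def fit_linearization(measured, ideal, deg=3):
    """Fit a polynomial linearization curve mapping measured
    (beam-hardened) line integrals onto the ideal monochromatic ones."""
    return np.polyfit(measured, ideal, deg)

def correct(projections, coeffs):
    """Apply the linearization curve to projection data."""
    return np.polyval(coeffs, projections)
```

    Applying `correct` to every projection value before (or, as here, in a postreconstruction pass after) reconstruction removes most of the cupping that a simple uncorrected reconstruction exhibits.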

  11. New microangiography system development providing improved small vessel imaging, increased contrast-to-noise ratios, and multiview 3D reconstructions

    NASA Astrophysics Data System (ADS)

    Kuhls, Andrew T.; Patel, Vikas; Ionita, Ciprian; Noël, Peter B.; Walczak, Alan M.; Rangwala, Hussain S.; Hoffmann, Kenneth R.; Rudin, Stephen

    2006-03-01

    A new microangiographic system (MA) integrated into a C-arm gantry has been developed, allowing precise placement of the MA at exactly the same angle as the standard x-ray image intensifier (II) with unchanged source and object positions. The MA can also be moved arbitrarily about the object and easily moved into the field of view (FOV) in front of the lower-resolution II when higher-resolution angiographic sequences are needed. The benefits of this new system are illustrated in a neurovascular study, in which a rabbit was injected with contrast media at varying oblique angles. Digital subtraction angiographic (DSA) images were obtained and compared using both the MA and II detectors for the same projection view. Vessels imaged with the MA appear sharper, with smaller vessels visualized. Visualization of ~100 μm vessels was possible with the MA but not with the II. Further, the MA could better resolve vessel overlap. Contrast-to-noise ratios (CNRs) were calculated for vessels of varying sizes for the MA versus the II and were found to be similar for large vessels, approximately double for medium vessels, and infinitely better for the smallest vessels. In addition, a 3D reconstruction of selected vessel segments was performed using multiple (three) projections at oblique angles for each detector. This new MA/II integrated system should lead to improved diagnosis and image guidance of neurovascular interventions by enabling initial guidance with the low-resolution, large-FOV II combined with use of the high-resolution MA during critical parts of diagnostic and interventional procedures.
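
    The contrast-to-noise ratio used for such detector comparisons is conventionally the difference between vessel and background means divided by the background standard deviation. A minimal sketch assuming that definition (the paper may use a variant; the function name is mine):

```python
import numpy as np

def cnr(vessel_pixels, background_pixels):
    """Contrast-to-noise ratio: mean signal difference over background noise."""
    return float((vessel_pixels.mean() - background_pixels.mean())
                 / background_pixels.std())
```

    An "infinitely better" CNR for the smallest vessels corresponds to the II case, where the vessel mean is indistinguishable from the background mean, driving the II's CNR toward zero while the MA's stays finite.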

  12. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from a 3D seismic reflection survey over the Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline spacing of 165 feet, using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters are shown on the images. Stratigraphic information and nearby well tracks are added to the images. The images are embedded in a Microsoft Word document with additional information. Exact location and depth are restricted for proprietary reasons. Data collection and processing were funded by Agua Caliente. The original data remain the property of Agua Caliente.

  13. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  14. Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area

    NASA Astrophysics Data System (ADS)

    Gini, R.; Passoni, D.; Pinto, L.; Sona, G.

    2012-07-01

    The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to improve its usability by employing audiovisual movable systems of 3D reconstruction, and to improve monitoring procedures by using new media that integrate the fruition phase with preservation. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields, and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Then, some tests were performed in order to analyze the quality of the UAV images for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed through GPS to allow accuracy analysis. Aerial triangulations (ATs) were carried out with the photogrammetric commercial software Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to pick out the pros and cons of each package in managing nonconventional aerial imagery, as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the onboard UAV navigation system.

  15. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  16. An ultrasound tomography system with polyvinyl alcohol (PVA) moldings for coupling: in vivo results for 3-D pulse-echo imaging of the female breast.

    PubMed

    Koch, Andreas; Stiller, Florian; Lerch, Reinhard; Ermert, Helmut

    2015-02-01

    Full-angle spatial compounding (FASC) is a concept for pulse-echo imaging using an ultrasound tomography (UST) system. With FASC, resolution is increased and speckle is suppressed by averaging pulse-echo data from 360°. In vivo investigations have already shown great potential for 2-D FASC in the female breast as well as in finger-joint imaging. However, providing a small number of images of parallel cross-sectional planes with enhanced image quality is not sufficient for diagnosis; volume data (3-D) is needed. For this purpose, we further developed our UST add-on system to automatically rotate a motorized array (3-D probe) around the object of investigation. Full integration of external motor and ultrasound electronics control in a custom-made program allows acquisition of 3-D pulse-echo RF datasets within 10 min. In the case of breast cancer imaging, this concept also enables imaging of near-thorax tissue regions, which cannot be achieved by 2-D FASC. Furthermore, moldings made of polyvinyl alcohol hydrogel (PVA-H) have been developed as a new acoustic coupling concept with great potential to replace the water bath technique in UST, which is problematic with respect to clinical investigations. In this contribution, we present in vivo results for 3-D FASC applied to imaging a female breast placed in a PVA-H molding during data acquisition. An algorithm is described to compensate for time-of-flight differences and to account for refraction at the water-PVA-H molding and molding-tissue interfaces. For this, the mean speed of sound (SOS) of the breast tissue is estimated with an image-based method. Our results show that the PVA-H molding concept is feasible and delivers good results. 3-D FASC is superior to 2-D FASC and provides 3-D volume data at increased image quality.
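    The speckle-suppression mechanism behind FASC can be illustrated with a toy compounding experiment; the frame count, image size and Rayleigh speckle model below are illustrative assumptions, not the authors' acquisition parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for co-registered pulse-echo frames acquired
# from N angles around the object: constant reflectivity corrupted by
# fully developed speckle, modeled as Rayleigh-distributed amplitude.
n_angles, shape = 36, (64, 64)
frames = [rng.rayleigh(scale=1.0, size=shape) for _ in range(n_angles)]

# Spatial compounding: average the envelope-detected frames.
# Uncorrelated speckle is suppressed roughly by a factor sqrt(N).
compound = np.mean(frames, axis=0)

snr_single = frames[0].mean() / frames[0].std()
snr_compound = compound.mean() / compound.std()
print(snr_single, snr_compound)
```

With 36 independent frames, the compounded speckle signal-to-noise ratio is close to six times that of a single frame, which is the statistical basis for the image-quality gain the abstract reports.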

  17. Teat Morphology Characterization With 3D Imaging.

    PubMed

    Vesterinen, Heidi M; Corfe, Ian J; Sinkkonen, Ville; Iivanainen, Antti; Jernvall, Jukka; Laakkonen, Juha

    2015-07-01

    The objective of this study was to visualize, in a novel way, the morphological characteristics of bovine teats to gain a better understanding of the detailed teat morphology. We applied silicone casting and 3D digital imaging in order to obtain a more detailed image of the teat structures than that seen in previous studies. Teat samples from 65 dairy cows over 12 months of age were obtained from cows slaughtered at an abattoir. The teats were classified according to the teat condition scoring used in Finland and the lengths of the teat canals were measured. Silicone molds were made from the external teat surface surrounding the teat orifice and from the internal surface of the teat consisting of the papillary duct, Fürstenberg's rosette, and the distal part of the teat cistern. The external and internal surface molds of 35 cows were scanned with a 3D laser scanner. The molds and the digital 3D models were used to evaluate internal and external teat surface morphology. A number of measurements were taken from the silicone molds. The 3D models reproduced the morphology of the teats accurately with high repeatability. Breed did not correlate with the teat classification score. The rosette was found to have significant variation in its size and number of mucosal folds. The internal surface morphology of the rosette did not correlate with the external surface morphology of the teat, implying that it is relatively independent of milking parameters that may impact the teat canal and the external surface of the teat. PMID:25382725

  18. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  19. 3D elemental sensitive imaging by full-field XFCT.

    PubMed

    Deng, Biao; Du, Guohao; Zhou, Guangzhao; Wang, Yudan; Ren, Yuqi; Chen, Rongchang; Sun, Pengfei; Xie, Honglan; Xiao, Tiqiao

    2015-05-21

    X-ray fluorescence computed tomography (XFCT) is a stimulated emission tomography modality that maps the three-dimensional (3D) distribution of elements. Generally, XFCT is performed by scanning a pencil beam across the sample. This paper presents a feasibility study of full-field XFCT (FF-XFCT) for 3D elemental imaging. The FF-XFCT system consists of a pinhole collimator and an X-ray imaging detector with no energy resolution. A prototype imaging system was set up at the Shanghai Synchrotron Radiation Facility (SSRF) for imaging a phantom. The first FF-XFCT experimental results are presented. The cadmium (Cd) and iodine (I) distributions were reconstructed. The results demonstrate that FF-XFCT is suitable for 3D elemental imaging and that its sensitivity is higher than that of a conventional CT system.

  20. Composite model of a 3-D image

    NASA Technical Reports Server (NTRS)

    Dukhovich, I. J.

    1980-01-01

    This paper presents a composite model of a moving (3-D) image especially useful for sequential image processing and encoding. A non-linear predictor based on the composite model is described. The performance of this predictor is used as a measure of the validity of the model for a real image source. The minimization of the total mean square prediction error provides an inequality which determines a condition for the profitable use of the composite model and can serve as a decision device for selecting the number of subsources within the model. The paper also describes statistical properties of the prediction error and contains results of computer simulation of two non-linear predictors in the case of perfect classification between subsources.

  1. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without requiring a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age or facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  2. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without requiring a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age or facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.

  3. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without requiring a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age or facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553
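    The triangulation imaging principle such an acquisition system relies on reduces to a one-line depth formula; the focal length and baseline below are made-up values for illustration, not the parameters of the actual scanner:

```python
# Minimal laser-triangulation depth sketch (hypothetical parameters):
# a light projector and a camera separated by a baseline b observe the
# same surface point; the lateral offset x of the spot on the sensor
# encodes the depth as Z = f * b / x.
f = 8e-3   # camera focal length [m] (assumed)
b = 0.10   # projector-camera baseline [m] (assumed)

def depth_from_offset(x_sensor):
    """Depth [m] of the illuminated point from its sensor offset [m]."""
    return f * b / x_sensor

# A point at 0.5 m depth produces a sensor offset of f*b/Z = 1.6 mm;
# inverting that offset recovers the depth.
x = f * b / 0.5
print(depth_from_offset(x))
```

Scanning the illumination across the ear and repeating this computation per pixel yields the 3D surface used for recognition.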

  4. Active and interactive floating image display using holographic 3D images

    NASA Astrophysics Data System (ADS)

    Morii, Tsutomu; Sakamoto, Kunio

    2006-08-01

    We developed a prototype tabletop holographic display system. This system consists of an object recognition system and a spatial imaging system. In this paper, we describe the recognition system, which uses an RFID tag, and the 3D display system, which uses holographic technology. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have been researching spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images [1,2,3]. The purpose of this paper is to propose an interactive tabletop 3D display system using these imaging technologies. The observer can view virtual images when the user puts a special object on the display table. The key technologies of this system are the object recognition system and the spatial imaging display.

  5. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model

    PubMed Central

    Zhou, Jian; Qi, Jinyi

    2014-01-01

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D TOF PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is composed primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon's ray-tracer, we propose a further simplified geometrical projector based on Bresenham's ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model such as the analytical model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery. We verify our observations using the provided theoretical analysis. The result offers a general guide to achieving optimal reconstruction performance based on a sparse factorization model with an image-domain-only resolution model. PMID:24434568

  6. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2014-02-01

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D time-of-flight PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is composed primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon's ray-tracer, we propose a further simplified geometrical projector based on Bresenham's ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery. We verify our observations using the provided theoretical analysis. The result offers a general guide to achieving the optimal reconstruction performance based on a sparse factorization model with an image domain resolution model.
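    The factored forward model described above can be sketched with sparse matrices; the toy 1-D sizes, blur kernel and pair-summing projector below are illustrative stand-ins, not the paper's actual PET geometry:

```python
import numpy as np
from scipy import sparse

# Toy 1-D analogue of the factored system model y = G @ (B @ x):
# G is a sparse geometric projection and B an image-domain blurring
# matrix. Sizes and kernels are illustrative only.
n = 8
x = np.zeros(n)
x[3] = 1.0  # point source in image space

# Image blurring matrix B: normalized 3-tap kernel on the diagonals.
B = sparse.diags([0.25, 0.5, 0.25], [-1, 0, 1], shape=(n, n), format="csr")

# Geometric projection G: each "detector bin" sums two adjacent pixels
# (a crude stand-in for a ray-tracer such as Siddon's or Bresenham's).
rows = np.repeat(np.arange(n // 2), 2)
cols = np.arange(n)
G = sparse.csr_matrix((np.ones(n), (rows, cols)), shape=(n // 2, n))

y = G @ (B @ x)          # forward projection of the blurred image
back = B.T @ (G.T @ y)   # matched backprojection
print(y, back)
```

Because G and B are each highly sparse, storing and applying them separately is far cheaper than forming the dense product G @ B, which is the storage/computation advantage the abstract describes.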

  7. [3D interactive clipping technology in medical image processing].

    PubMed

    Sun, Shaoping; Yang, Kaitai; Li, Bin; Li, Yuanjun; Liang, Jing

    2013-09-01

    The aim of this paper is to study methods of 3D visualization and 3D interactive clipping of CT/MRI image sequences in arbitrary orientations based on the Visualization Toolkit (VTK). A new method for clipping 3D reconstructed CT/MRI images is presented, which can clip the 3D object and the 3D space of a medical image sequence to observe the inner structure, using a 3D widget for manipulating an infinite plane. Experimental results show that the proposed method can implement 3D interactive clipping of medical images effectively and obtain satisfactory results with good quality in a short time.

  8. Automatic 3D lesion segmentation on breast ultrasound images

    NASA Astrophysics Data System (ADS)

    Kuo, Hsien-Chi; Giger, Maryellen L.; Reiser, Ingrid; Drukker, Karen; Edwards, Alexandra; Sennett, Charlene A.

    2013-02-01

    Automatically acquired and reconstructed 3D breast ultrasound images allow radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in 3D ultrasound can be difficult and time consuming. In this study, we evaluate a 3D lesion segmentation method, which we had previously developed for breast CT, and investigate its robustness on lesions in 3D breast ultrasound images. Our dataset includes 98 3D breast ultrasound images obtained on an ABUS system from 55 patients containing 64 cancers. Cancers depicted on 54 US images had been clinically interpreted as negative on screening mammography and 44 had been clinically visible on mammography. All were from women with breast density BI-RADS 3 or 4. Tumor centers and margins were indicated and outlined by radiologists. Initial RGI-eroded contours were automatically calculated and served as input to the active contour segmentation algorithm yielding the final lesion contour. Tumor segmentation was evaluated by determining the overlap ratio (OR) between computer-determined and manually drawn outlines. Resulting average overlap ratios on coronal, transverse, and sagittal views were 0.60 +/- 0.17, 0.57 +/- 0.18, and 0.58 +/- 0.17, respectively. All OR values were significantly higher than 0.4, which is deemed "acceptable". Within the groups of mammogram-negative and mammogram-positive cancers, the overlap ratios were 0.63 +/- 0.17 and 0.56 +/- 0.16, respectively, on the coronal views, with similar results on the other views. The segmentation performance was not found to be correlated with tumor size. Results indicate robustness of the 3D lesion segmentation technique in multi-modality 3D breast imaging.
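    The abstract does not spell out how the overlap ratio (OR) is computed; a common choice is intersection-over-union between the two outlines, sketched here on synthetic binary masks together with the related Dice coefficient:

```python
import numpy as np

# Synthetic stand-ins for a computer-determined and a manually drawn
# lesion outline, rasterized as binary masks (illustrative only).
auto = np.zeros((32, 32), dtype=bool)
manual = np.zeros((32, 32), dtype=bool)
auto[8:24, 8:24] = True      # 16x16 computer outline
manual[12:28, 12:28] = True  # 16x16 manual outline, shifted by 4 px

inter = np.logical_and(auto, manual).sum()
union = np.logical_or(auto, manual).sum()
iou = inter / union                            # intersection-over-union
dice = 2 * inter / (auto.sum() + manual.sum()) # Dice similarity
print(iou, dice)
```

Both measures are 1.0 for perfect agreement and 0.0 for disjoint outlines; the paper's exact OR definition may differ, so this is an illustrative interpretation.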

  9. True-Depth: a new type of true 3D volumetric display system suitable for CAD, medical imaging, and air-traffic control

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Floating Images, Inc. is developing a new type of volumetric monitor capable of producing a high-density set of points in 3D space. Since the points of light actually exist in space, the resulting image can be viewed with continuous parallax, both vertically and horizontally, with no headache or eyestrain. These 'real' points in space are always viewed with a perfect match between accommodation and convergence. All scanned points appear to the viewer simultaneously, making this display especially suitable for CAD, medical imaging, air-traffic control, and various military applications. This system has the potential to display imagery so accurately that a ruler could be placed within the aerial image to provide precise measurement in any direction. A special virtual imaging arrangement allows the user to superimpose 3D images on a solid object, making the object look transparent. This is particularly useful for minimally invasive surgery in which the internal structure of a patient is visible to a surgeon in 3D. Surgical procedures can be carried out through the smallest possible hole while the surgeon watches the procedure from outside the body as if the patient were transparent. Unlike other attempts to produce volumetric imaging, this system uses no massive rotating screen or any screen at all, eliminating down time due to breakage and possible danger due to potential mechanical failure. Additionally, it is also capable of displaying very large images.

  10. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

    Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

  11. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  12. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as to promote 3D photography not only for scientists but also for amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the samples shown are masterpieces of historic as well as current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, the paper deals with new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been compiled as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their limited resolution, contrast and color, recall the early days of the invention of photography.

  13. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  14. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  15. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays, 3D acquisition systems are used in many applications, such as the cinema industry or automotive active-safety systems. Depending on the application, systems have different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time-of-flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances ranging from 10 cm to 7.5 m.
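    The iTOF distance measurement follows directly from the phase delay of the modulated light, d = c·φ/(4π·f_mod); the modulation frequency below is an assumed value, chosen so that the unambiguous range matches the 7.5 m quoted above:

```python
import numpy as np

# Indirect time-of-flight: distance from the phase delay of
# sinusoidally modulated light, d = c * phi / (4 * pi * f_mod).
C = 299_792_458.0   # speed of light [m/s]
F_MOD = 20e6        # modulation frequency [Hz] (assumed, not the paper's)

def distance_from_phase(phi):
    """Distance [m] for a measured phase delay phi [rad]."""
    return C * phi / (4 * np.pi * F_MOD)

# The phase wraps at 2*pi, so the unambiguous range is
# c / (2 * f_mod), about 7.5 m at 20 MHz modulation.
print(distance_from_phase(2 * np.pi))
```

Beyond the unambiguous range the measured phase aliases back to a nearer distance, which is why such systems quote a maximum working distance.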

  16. Optimized Bayes variational regularization prior for 3D PET images.

    PubMed

    Rapisarda, Eugenio; Presotto, Luca; De Bernardi, Elisabetta; Gilardi, Maria Carla; Bettinardi, Valentino

    2014-09-01

    A new prior for variational Maximum a Posteriori regularization is proposed to be used in a 3D One-Step-Late (OSL) reconstruction algorithm accounting also for the Point Spread Function (PSF) of the PET system. The new regularization prior strongly smooths background regions, while preserving transitions. A detectability index is proposed to optimize the prior. The new algorithm has been compared with different reconstruction algorithms such as 3D-OSEM+PSF, 3D-OSEM+PSF+post-filtering and 3D-OSL with a Gauss-Total Variation (GTV) prior. The proposed regularization allows noise to be controlled while maintaining good signal recovery; compared to the other algorithms, it demonstrates a very good compromise between improved quantitation and good image quality. PMID:24958594

  17. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system, which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.
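    The second display method, showing three orthogonal sections through a user-selected point, can be sketched in a few lines; the volume here is synthetic, with 26 slices of 256 x 256 pixels mirroring the dataset described above:

```python
import numpy as np

# Synthetic stand-in for the 26-slice, 256x256 MRI volume.
vol = np.arange(26 * 256 * 256).reshape(26, 256, 256)

def orthogonal_sections(volume, point):
    """Axial, coronal and sagittal planes intersecting at (z, y, x)."""
    z, y, x = point
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

# Extract the three sections through a user-selected voxel.
axial, coronal, sagittal = orthogonal_sections(vol, (13, 128, 64))
print(axial.shape, coronal.shape, sagittal.shape)
```

All three sections share the selected voxel, so redrawing them as the user moves the point gives the simultaneous tri-planar view the abstract describes.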

  18. Single 3D cell segmentation from optical CT microscope images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Reeves, Anthony P.

    2014-03-01

    The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods, a global threshold gradient based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure of merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types. The cell types consisted of columnar, macrophage, metaplastic and squamous human cells and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentations respectively for the 2D reference images and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images and 71% and 51% for the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than for the nucleus since the cytoplasm was not differentiated as well by image intensity from the background.

  19. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    SciTech Connect

    Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A.

    2007-11-15

    Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated with a TLE{sub z} of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component with a FRE{sub z} of 0.76 mm and a TRE{sub z} of 0.85 mm. 
A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety, and dual-modality imaging for target localization.

  20. Fast and efficient fully 3D PET image reconstruction using sparse system matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-10-01

    Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirement. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processor unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometrical projection matrix is precomputed based on a simple line integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far less nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.

  1. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, be used not only for 2D imaging but also for three-dimensional reconstruction, using either tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the 3D positions reconstructed from the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
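Triangulating a 3D point from two calibrated views, as in the study above, can be sketched with the midpoint method: the reconstructed point is the midpoint of the common perpendicular between the two viewing rays (camera centres and ray directions are assumed known from calibration; the study's exact triangulation method is not stated):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two viewing rays
    (camera centres c1, c2; ray directions d1, d2 through the 2D detections)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * e - c * d) / denom      # closest point parameter on ray 1
    t = (a * e - b * d) / denom      # closest point parameter on ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two views 90 degrees apart observing a point at (0, 0, 10).
c1, c2 = np.zeros(3), np.array([10.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 10.0])
p = triangulate_midpoint(c1, target - c1, c2, target - c2)
```

The conditioning of `denom` explains the paper's finding: rays from views 90° apart are maximally non-parallel, which minimizes the sensitivity to 2D localization noise.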

  2. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system for capturing stereo images with two CMOS camera modules. We use WinCE as the operating system and capture the stereo images using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to a host computer over a WiFi wireless link, and then use GPU hardware and CUDA programming to implement real-time three-dimensional stereo imaging by synthesizing the depth of the ROI (region of interest). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glasses-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to confirm the validity of the ROI emphasis effect.

  3. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc.). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and on the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
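The spine-based coordinate system rests on a polynomial model of the vertebral column curve; each reformatted slice is then resampled in the plane orthogonal to the curve tangent. A sketch of evaluating such a curve and its unit tangent (the cubic coefficients below are made up for illustration, and the vertebral rotation model is omitted):

```python
import numpy as np

# Hypothetical cubic polynomial model of the vertebral column curve:
# x(z) and y(z) as polynomials in the axial coordinate z
# (np.polyval expects the highest power first).
cx = np.array([2e-5, -3e-3, 0.1, 0.0])
cy = np.array([0.0, 1e-3, -0.05, 0.0])

def spine_point(z):
    """Point on the spine curve at axial position z."""
    return np.array([np.polyval(cx, z), np.polyval(cy, z), z])

def spine_tangent(z):
    """Unit tangent of the curve; a CPR slice at z is resampled in the
    plane orthogonal to this tangent."""
    t = np.array([np.polyval(np.polyder(cx), z),
                  np.polyval(np.polyder(cy), z),
                  1.0])
    return t / np.linalg.norm(t)

p, t = spine_point(40.0), spine_tangent(40.0)
```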

  4. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins have come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These recently developed signal processing techniques, applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes.

  5. Single-plane versus three-plane methods for relative range error evaluation of medium-range 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Cournoyer, Luc; Beraldin, J.-Angelo

    2015-05-01

    Within the context of the ASTM E57 working group WK12373, we compare the two methods that had been initially proposed for calculating the relative range error of medium-range (2 m to 150 m) optical non-contact 3D imaging systems: the first is based on a single plane (single-plane assembly) and the second on an assembly of three mutually non-orthogonal planes (three-plane assembly). Both methods are evaluated for their utility in generating a metric to quantify the relative range error of medium-range optical non-contact 3D imaging systems. We conclude that the three-plane assembly is comparable to the single-plane assembly with regard to quantification of relative range error while eliminating the requirement to isolate the edges of the target plate face.
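One way to turn a planar-target scan into a relative range error statistic is the RMS of orthogonal residuals from a least-squares plane fit. The sketch below uses that statistic for illustration only; it is not the exact definition adopted by the ASTM E57 WK12373 working group:

```python
import numpy as np

def plane_fit_rms(points):
    """Fit a least-squares plane to an (N, 3) point cloud and return the RMS
    of the orthogonal residuals, one candidate relative-range-error statistic."""
    centred = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    normal = Vt[-1]                  # direction of least variance = plane normal
    resid = centred @ normal         # signed orthogonal distances to the plane
    return float(np.sqrt((resid ** 2).mean()))

# A synthetic 10 x 10 grid scan of a flat plate, 10 mm point spacing.
xs, ys = np.meshgrid(np.arange(10.0) * 10, np.arange(10.0) * 10)
scan = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
print(plane_fit_rms(scan))   # ideally planar data -> essentially zero
```

For the three-plane assembly, the same fit is run once per plane after segmenting the cloud, which is what removes the need to isolate the plate edges.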

  6. A 3D digital medical photography system in paediatric medicine.

    PubMed

    Williams, Susanne K; Ellis, Lloyd A; Williams, Gigi

    2008-01-01

    In 2004, traditional clinical photography services at the Educational Resource Centre were extended using new technology. This paper describes the establishment of a 3D digital imaging system in a paediatric setting at the Royal Children's Hospital, Melbourne.

  7. Quantitative data quality metrics for 3D laser radar systems

    NASA Astrophysics Data System (ADS)

    Stevens, Jeffrey R.; Lopez, Norman A.; Burton, Robin R.

    2011-06-01

    Several quantitative data quality metrics for three dimensional (3D) laser radar systems are presented, namely: X-Y contrast transfer function, Z noise, Z resolution, X-Y edge & line spread functions, 3D point spread function and data voids. These metrics are calculated from both raw and/or processed point cloud data, providing different information regarding the performance of 3D imaging laser radar systems and the perceptual quality attributes of 3D datasets. The discussion is presented within the context of 3D imaging laser radar systems employing arrays of Geiger-mode Avalanche Photodiode (GmAPD) detectors, but the metrics may generally be applied to linear mode systems as well. An example for the role of these metrics in comparison of noise removal algorithms is also provided.

  8. Super deep 3D images from a 3D omnifocus video camera.

    PubMed

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  9. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to realize a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF (intermediate frequency) yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
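In a chirp (linear-FM) radar, the IF (beat) frequency maps linearly to range, which is how each GDD pixel yields its range value. A minimal sketch with hypothetical sweep parameters (the system's actual bandwidth and sweep time are not given in the abstract):

```python
# Range from the beat (intermediate) frequency of a linear-FM chirp radar:
# a target at range R delays the echo by 2R/c, so f_IF = slope * 2R/c and
# therefore R = c * f_IF / (2 * slope).
C = 3.0e8   # propagation speed, m/s

def chirp_range(f_if_hz, bandwidth_hz, sweep_time_s):
    slope = bandwidth_hz / sweep_time_s   # chirp rate, Hz/s
    return C * f_if_hz / (2.0 * slope)

# Hypothetical sweep: 6 GHz bandwidth over 1 ms; a 400 kHz beat -> 10 m.
print(chirp_range(400e3, 6e9, 1e-3))   # -> 10.0
```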

  10. Computer-aided 3D display system and its application in 3D vision test

    NASA Astrophysics Data System (ADS)

    Shen, XiaoYun; Ma, Lan; Hou, Chunping; Wang, Jiening; Tang, Da; Li, Chang

    1998-08-01

    A newly developed computer-aided 3D display system, a flicker-free field-sequential stereoscopic image display system, is presented. The system is composed of a personal computer, a liquid crystal glasses driving card, stereoscopic display software and liquid crystal glasses. It can display field-sequential stereoscopic images at refresh rates of 70 Hz to 120 Hz. A typical application of this system, a 3D vision test system, is the main focus of this paper. This stereoscopic vision test system can quantitatively test stereoscopic acuity, crossed disparity, uncrossed disparity and dynamic stereoscopic vision. Random-dot stereograms are used as the stereoscopic vision test charts. Through practical comparison experiments between anaglyph stereoscopic vision test charts and this stereoscopic vision test system, statistical figures and test results are presented.

  11. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
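The CNR measurement above follows the usual definition: contrast between a target region and the background, normalized by the background noise. A sketch on synthetic data (the masks and pixel values are illustrative, not the PAM phantom data):

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio: (mean signal - mean background) / std(background)."""
    s = image[signal_mask].mean()
    b = image[background_mask].mean()
    return float((s - b) / image[background_mask].std())

# Synthetic example: alternating background of 0 and 2 (mean 1, std 1),
# with a 2 x 2 "dot" of value 6 -> CNR of 5.
img = np.tile(np.array([0.0, 2.0]), 50).reshape(10, 10)
sig = np.zeros((10, 10), dtype=bool)
sig[:2, :2] = True
img[sig] = 6.0
print(cnr(img, sig, ~sig))   # -> 5.0
```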

  12. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal of high-density, high-performance microelectronics was pursued through dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through-vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. These process developments were validated through both Sandia prototypes and subsequent commercial examples.

  13. A full-field and real-time 3D surface imaging augmented DOT system for in-vivo small animal studies

    NASA Astrophysics Data System (ADS)

    Yi, Steven X.; Yang, Bingcheng; Yin, Gongjie

    2010-02-01

    A crucial parameter in Diffuse Optical Tomography (DOT) is the construction of an accurate forward model, which depends greatly on the tissue boundary. Since photon propagation is a three-dimensional volumetric problem, extraction and subsequent modeling of three-dimensional boundaries is essential. Early experimental demonstrations of the feasibility of DOT for reconstructing absorbers, scatterers and fluorochromes used phantoms or tissues confined appropriately to conform to easily modeled geometries such as a slab or a cylinder. In later years, several methods were developed to model photon propagation through diffuse media with complex boundaries using numerical solutions of the diffusion or transport equation (finite elements or differences) or, more recently, analytical methods based on the tangent-plane method. While optical examinations performed simultaneously with anatomical imaging modalities such as MRI provide well-defined boundaries, very limited progress has been made so far in extracting full-field (360 degree) boundaries for stand-alone in-vivo three-dimensional DOT imaging. In this paper, we present a desktop multi-spectrum in-vivo 3D DOT system for small animal imaging. This system is augmented with Technest's full-field 3D cameras. The system has the capability of acquiring 3D object surface profiles in real time and registering the 3D boundary with diffuse tomography. Extensive experiments were performed on phantoms and small animals by our collaborators at the Center for Molecular Imaging Research (CMIR) at Massachusetts General Hospital (MGH) and Harvard Medical School. The data have shown successful DOT reconstructions with improved accuracy.

  14. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving potential EVA site survey, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotic Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained and discusses its application in lunar surface robotic surveying and scouting.
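A scanning lidar such as the ILRIS-3D builds its 3D image from pulse time-of-flight plus the two scan bearings per pulse. The conversion to Cartesian sensor-frame coordinates can be sketched as follows (the axis convention is an assumption, not the instrument's documented frame):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def lidar_point(t_flight_s, azimuth_rad, elevation_rad):
    """Convert pulse time-of-flight plus scan bearings to a Cartesian point
    in the sensor frame (x forward, y left, z up; convention assumed)."""
    r = C * t_flight_s / 2.0                      # halve the two-way travel time
    ce = math.cos(elevation_rad)
    return (r * ce * math.cos(azimuth_rad),
            r * ce * math.sin(azimuth_rad),
            r * math.sin(elevation_rad))

# A return after ~0.667 microseconds lies about 100 m away on boresight.
x, y, z = lidar_point(2 * 100.0 / C, 0.0, 0.0)
```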

  15. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensated for the distortion in the elemental images of an EHR projection-type integral 3-D system. As a result, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.

  16. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990s, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and with the formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other sidelobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  17. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
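The DIBR alternative described above warps the single rendered view sideways by its per-pixel disparity taken from the z-buffer. A simplified sketch with nearest-neighbor hole filling (real game drivers handle occlusion ordering and disocclusion inpainting far more carefully):

```python
import numpy as np

def dibr_right_view(left, disparity):
    """Warp the left view into a right view by shifting each pixel by its
    disparity (in pixels). Naive scan order; a production DIBR warp would
    process pixels in depth order to resolve occlusions correctly."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))   # shift toward the left
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
                filled[y, xr] = True
        for x in range(1, w):                      # fill disocclusion holes
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return right

# One-row toy image with uniform disparity of 2 pixels.
left = np.arange(8.0).reshape(1, 8)
disp = np.full((1, 8), 2.0)
right = dibr_right_view(left, disp)
```

Rendering once plus a warp like this is what saves the second full scene render, at the cost of the disocclusion artifacts the paper's trade-off analysis discusses.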

  18. SU-E-J-13: Six Degree of Freedom Image Fusion Accuracy for Cranial Target Localization On the Varian Edge Stereotactic Radiosurgery System: Comparison Between 2D/3D and KV CBCT Image Registration

    SciTech Connect

    Xu, H; Song, K; Chetty, I; Kim, J; Wen, N

    2015-06-15

    Purpose: To determine the 6 degree of freedom (DOF) systematic deviations between 2D/3D and CBCT image registration with various imaging setups and fusion algorithms on the Varian Edge Linac. Methods: An anthropomorphic head phantom with embedded radio-opaque targets was scanned with CT slice thicknesses of 0.8, 1, 2, and 3 mm. The 6 DOF systematic errors were assessed by comparing 2D/3D (kV/MV with CT) with 3D/3D (CBCT with CT) image registrations with different offset positions, similarity measures, image filters, and CBCT slice thicknesses (1 and 2 mm). The 2D/3D registration accuracy of 51 fractions for 26 cranial SRS patients was also evaluated by analyzing 2D/3D pre-treatment verification taken after 3D/3D image registrations. Results: The systematic deviations of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs were within ±0.3mm and ±0.3° for translations and rotations with a 95% confidence interval (CI) for a reference CT with 0.8 mm slice thickness. No significant difference (P>0.05) in target localization was observed between 0.8mm, 1mm, and 2mm CT slice thicknesses with CBCT slice thicknesses of 1mm and 2mm. With 3mm CT slice thickness, both 2D/3D and 3D/3D registrations performed less accurately in the longitudinal direction than with thinner CT slices (0.60±0.12mm and 0.63±0.07mm off, respectively). Using a content filter, and using pattern intensity rather than mutual information as the similarity measure, each significantly improved the 2D/3D registration accuracy (P=0.02 and P=0.01, respectively). For the patient study, means and standard deviations of residual errors were 0.09±0.32mm, −0.22±0.51mm and −0.07±0.32mm in the VRT, LNG and LAT directions, respectively, and 0.12°±0.46°, −0.12°±0.39° and 0.06°±0.28° in the RTN, PITCH, and ROLL directions, respectively. The 95% CI of translational and rotational deviations were comparable to those in the phantom study. Conclusion: 2D/3D image registration provided on the Varian Edge radiosurgery, 6 DOF

  19. 3D imaging of fetus vertebra by synchrotron radiation microtomography

    NASA Astrophysics Data System (ADS)

    Peyrin, Francoise; Pateyron-Salome, Murielle; Denis, Frederic; Braillon, Pierre; Laval-Jeantet, Anne-Marie; Cloetens, Peter

    1997-10-01

    A synchrotron radiation computed microtomography system allowing high-resolution 3D imaging of bone samples has been developed at the ESRF. The system uses a high-resolution 2D detector based on a CCD camera coupled to a fluorescent screen through light optics. The spatial resolution of the device is particularly well adapted to imaging bone structure. With a view to studying growth, vertebra samples from fetuses of different gestational ages were imaged. The first results show that fetal vertebra is quite different from adult bone both in terms of density and organization.

  20. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001)) enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method regarding the 3D spatial resolution of digital holography. PMID:27139648

  1. Evaluation of a prototype 3D ultrasound system for multimodality imaging of cervical nodes for adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Fraser, Danielle; Fava, Palma; Cury, Fabio; Vuong, Te; Falco, Tony; Verhaegen, Frank

    2007-03-01

    Sonography has good topographic accuracy for superficial lymph node assessment in patients with head and neck cancers. It is therefore an ideal non-invasive tool for precise inter-fraction volumetric analysis of enlarged cervical nodes. In addition, when registered with computed tomography (CT) images, ultrasound information may improve target volume delineation and facilitate image-guided adaptive radiation therapy. A feasibility study was conducted to evaluate the use of a prototype ultrasound system capable of three-dimensional visualization and multi-modality image fusion for cervical node geometry. A ceiling-mounted optical tracking camera recorded the position and orientation of the transducer in order to synchronize the transducer's position with respect to the room's coordinate system. Tracking systems were installed in both the CT-simulator and radiation therapy treatment rooms. Serial images were collected at the time of treatment planning and at subsequent treatment fractions. Volume reconstruction was performed by generating surfaces around contours. The quality of the spatial reconstruction and semi-automatic segmentation was highly dependent on the system's ability to track the transducer throughout each scan procedure. The ultrasound information provided enhanced soft tissue contrast and facilitated node delineation. Manual segmentation was the preferred method to contour structures due to their sonographic topography.

  2. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D
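The fusion step described above (per-pixel median over the disparity fields of the retrieved stereopairs, then warping the query to synthesize the right view) can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the function names are mine, and the warping is a minimal nearest-pixel version that simply leaves newly exposed pixels empty rather than the occlusion handling the paper alludes to.

```python
import numpy as np

def fuse_disparities(disparity_maps):
    """Combine per-pixel disparities from several retrieved stereopairs.

    The median is robust to outlier disparities caused by differences in
    image content, noise level, and distortions among the stereopairs.
    """
    stack = np.stack(disparity_maps, axis=0)  # shape (k, H, W)
    return np.median(stack, axis=0)

def synthesize_right_view(left, disparity):
    """Shift each left-image pixel by its disparity to form the right
    view; unfilled (newly exposed) pixels are left at zero."""
    h, w = left.shape
    right = np.zeros_like(left)
    cols = np.arange(w)
    for y in range(h):
        target = cols - np.round(disparity[y]).astype(int)
        valid = (target >= 0) & (target < w)
        right[y, target[valid]] = left[y, cols[valid]]
    return right
```

A uniform disparity of 1 simply shifts the row left by one pixel, with the vacated rightmost pixel left unfilled.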

  3. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

In this paper, a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
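The topology idea above, restricting the costly feature-matching stage to image pairs that the flight-control data places close together, can be sketched as follows. This is an illustrative Python sketch under my own assumptions (the paper does not publish its criterion); `candidate_pairs` and the pure distance threshold are hypothetical.

```python
import math
from itertools import combinations

def candidate_pairs(camera_positions, max_baseline):
    """Return image index pairs whose UAV capture positions lie within
    max_baseline of each other; only these pairs are passed to the
    expensive feature-matching stage instead of all N*(N-1)/2 pairs."""
    pairs = []
    for (i, p), (j, q) in combinations(enumerate(camera_positions), 2):
        if math.dist(p, q) <= max_baseline:
            pairs.append((i, j))
    return pairs
```

With capture positions spaced far apart, only genuinely overlapping images survive the filter, which is where the quoted speed-up of feature matching comes from.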

  4. Extraction of 3D information from sonar image sequences.

    PubMed

    Trucco, A; Curletto, S

    2003-01-01

This paper describes a set of methods that make it possible to estimate the position of a feature inside a three-dimensional (3D) space by starting from a sequence of two-dimensional (2D) acoustic images of the seafloor acquired with a sonar system. Typical sonar imaging systems can generate only 2D images, and the acquisition of 3D information involves sharp increases in complexity and cost. The front-scan sonar proposed in this paper is new equipment devoted to acquiring a 2D image of the seafloor ahead of the ship, and allows one to collect a sequence of images showing a specific feature as the ship approaches it. This makes it possible to recover the 3D position of a feature by comparing the feature positions along the sequence of images acquired from different (known) ship positions. This opportunity is investigated in the paper, where it is shown that encouraging results have been obtained by a processing chain composed of blocks devoted to low-level processing, feature extraction and analysis, a Kalman filter for robust feature tracking, and some ad hoc equations for depth estimation and averaging. A statistical error analysis demonstrated the great potential of the proposed system even when some inaccuracies affect the sonar measurements and the knowledge of the ship position. This was also confirmed by several tests performed on both simulated and real sequences, obtaining satisfactory results for both the feature tracking and, above all, the estimation of the 3D position.

  5. Defining the medial-lateral axis of an anatomical femur coordinate system using freehand 3D ultrasound imaging.

    PubMed

    Passmore, Elyse; Sangeux, Morgan

    2016-03-01

Hip rotation from gait analysis informs clinical decisions regarding correction of femoral torsional deformities. However, it is among the least repeatable gait parameters due to discrepancies in determining the medial-lateral axis of the femur. Conventional or functional calibration methods may be used to define the axis, but there is no benchmark to evaluate these methods. Freehand 3D ultrasound, the coupling of ultrasound with 3D motion capture, may provide such a benchmark. We measured the accuracy in vitro and repeatability in vivo of determining the femoral condylar axis from freehand 3D ultrasound. The condylar axis provided the reference medial-lateral axis of the femur and was used to evaluate one conventional method and three functional calibration methods, applied to three calibration movements. Ten healthy subjects (20 limbs) underwent 3D gait analysis and freehand 3D ultrasound. The functional calibration methods were a transformation technique, a geometrical method and a method that minimises the variance of knee varus-valgus kinematics (DynaKAD). The conventional method used markers over the femoral epicondyles. The condylar axis determined by 3D ultrasound showed good accuracy in vitro, 1.6° (SD: 0.3°), and good repeatability in vivo, 0.2° (RMSD: 2.3°). The DynaKAD method applied to the walking calibration movement determined the medial-lateral axis closest to the ultrasound reference. The average angular difference in the transverse plane was 3.1° (SD: 6.1°). Freehand 3D ultrasound offers an accurate, non-invasive and relatively fast method to locate the medial-lateral axis of the femur for gait analysis.
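The DynaKAD idea mentioned above, choosing the axis orientation that minimises varus-valgus variance, can be sketched as follows. This is a hypothetical Python sketch under a small-angle assumption (a mis-aligned flexion axis leaks flexion into varus-valgus approximately linearly); it is not the authors' implementation, and the function name and grid search are mine.

```python
import numpy as np

def dynakad_offset(knee_flexion, varus_valgus, search_deg=30):
    """Find the axial rotation offset (degrees) of the marker-defined
    knee axis that minimises varus-valgus variance.  Rotating the axis
    by angle t mixes the two angle curves; the offset with the least
    residual varus-valgus variance best removes flexion cross-talk."""
    best_t, best_var = 0.0, np.inf
    for deg in np.arange(-search_deg, search_deg + 0.1, 0.1):
        t = np.radians(deg)
        vv = varus_valgus * np.cos(t) - knee_flexion * np.sin(t)
        if vv.var() < best_var:
            best_t, best_var = deg, vv.var()
    return best_t
```

If the varus-valgus trace is pure cross-talk (a fixed fraction of flexion), the recovered offset is the arctangent of that fraction.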

  6. Development of a Hausdorff distance based 3D quantification technique to evaluate the CT imaging system impact on depiction of lesion morphology

    NASA Astrophysics Data System (ADS)

    Sahbaee, Pooyan; Robins, Marthony; Solomon, Justin; Samei, Ehsan

    2016-04-01

The purpose of this study was to develop a 3D quantification technique to assess the impact of the imaging system on the depiction of lesion morphology. A Regional Hausdorff Distance (RHD) was computed from two 3D volumes: virtual mesh models of synthetic nodules ("virtual nodules") and CT images of physical nodules ("physical nodules"). The method can be described in the following steps. First, the synthetic nodule was inserted into an anthropomorphic Kyoto thorax phantom and scanned on a Siemens scanner (Flash); the nodule was then segmented from the image. Second, in order to match the orientation of the nodules, the digital models of the "virtual" and "physical" nodules were both geometrically translated to the origin, and the "physical" nodule was then rotated in 10-degree increments. Third, the Hausdorff Distance was calculated for each pair of "virtual" and "physical" nodules; the minimum HD value identified the best-matching pair. Finally, the 3D RHD map and the distribution of RHD were computed for the matched pair. The technique was reduced to a scalar using the FWHM of the RHD distribution. The analysis was conducted for various nodule shapes (spherical, lobular, elliptical, and spiculated). The calculated FWHM values of the RHD distribution for the 8-mm spherical, lobular, elliptical, and spiculated "virtual" and "physical" nodules were 0.23, 0.42, 0.33, and 0.49, respectively.
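The rotational matching described above can be sketched as follows. This is an illustrative Python sketch, not the authors' code: the function names are mine, and the exhaustive 10° search about a single axis is a simplification of the full 3D orientation matching of point clouds sampled from the two nodule surfaces.

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (n,3) point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def min_hausdorff_over_rotation(virtual, physical, step_deg=10):
    """Rotate the physical nodule about the z-axis in step_deg
    increments (both sets pre-centred at the origin) and keep the
    orientation giving the smallest Hausdorff distance."""
    best = np.inf
    for deg in range(0, 360, step_deg):
        t = np.radians(deg)
        rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                       [np.sin(t),  np.cos(t), 0.0],
                       [0.0,        0.0,       1.0]])
        best = min(best, hausdorff(virtual, physical @ rz.T))
    return best
```

For two copies of the same shape differing only by a rotation that the grid can reach, the minimum over the search is (numerically) zero.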

  7. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

As clinical applications grow, 3-D ultrasound imaging is undergoing rapid technical development. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and lower cost. We designed a sliding track with an attached linear position sensor that transmitted positional data via a Bluetooth-based wireless communication module, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were acquired simultaneously as the probe was moved along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592

  8. Dynamic 3D computed tomography scanner for vascular imaging

    NASA Astrophysics Data System (ADS)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

A 3D dynamic computed tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions, and enables measurements of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system comprised a high-resolution computed tomographic system based on a modified x-ray image intensifier (XRII) and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer-synchronized control, a time-resolved sequence of 20 mm thick high-resolution volume images of porcine aortic specimens during one simulated cardiac cycle was obtained. Performance evaluation of the scanner showed that tomographic images can be obtained with resolution as high as 3.2 mm-1, with only a 9% decrease in resolution for objects moving at 1 cm/s in 2D mode, and with static spatial resolution of 3.55 mm-1 and only a 14% decrease in resolution in 3D mode for objects moving at 10 cm/s. Application of the system to imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurements of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of Edyn in the axial and longitudinal directions produced values of 428 +/- 35 kPa and 728 +/- 71 kPa respectively, demonstrating the anisotropic viscoelastic nature of the vascular specimens. The values obtained from the dynamic CT system were not statistically different (p less than 0.05) from those obtained by standard uniaxial tensile testing and volumetric measurements.

  9. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    SciTech Connect

Lee, J.; Yun, G. S.; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-15

A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using the two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  10. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry and provide insights and perspectives on the process necessary to generate a 3D mass spectral image volume. PMID:22276611

  11. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

Current display technology has relied on flat 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization based primarily on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° range of views without any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology, dramatically changing the way humans interact with computers and significantly improving the efficiency of learning and knowledge-management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined so that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  12. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity-based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we effectively outline the complex three-dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed with both the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.

  13. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel-1 at a 1.4 m camera height from the ground. The scanning rate of the camera can be set to a maximum of 5000 lines s-1, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.

  14. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures and shapes, and across ambient lighting conditions, are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  15. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

3D imaging techniques are very important and indispensable in diagnosis. The mainstream techniques are those in which a 3D image is reconstructed from a set of slice images, such as X-ray CT and MRI. However, these systems require large spaces and high costs. On the other hand, a low-cost, small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  16. Validation of image processing tools for 3-D fluorescence microscopy.

    PubMed

    Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric

    2002-04-01

3-D optical fluorescence microscopy has nowadays become an efficient tool for volumetric investigation of living biological samples. Using an optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system's optical transfer function and non-optimal experimental conditions, acquired raw data usually suffer from distortions. In order to carry out biological analysis, the raw data have to be restored by deconvolution. System identification via the point-spread function is useful for obtaining knowledge of the actual system and experimental parameters, which is necessary to restore the raw data; it is furthermore helpful for refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method concerning cytogenetics is presented.

  17. High-speed 3D imaging by DMD technology

    NASA Astrophysics Data System (ADS)

    Hoefling, Roland

    2004-05-01

The paper presents an advanced solution for capturing the height of an object in addition to the 2D image, as is frequently desired in machine vision applications. Based upon the active fringe projection methodology, the system takes advantage of a series of patterns projected onto the object surface and observed by a camera to provide reliable, accurate and highly resolved 3D data from any scattering object surface. The paper shows how the recording of a projected image series can be significantly accelerated and improved in quality to overcome current limitations. The key is ALP - a metrology-dedicated hardware design using the Discovery 1100 platform for the DMD micromirror device from Texas Instruments Inc. The paper describes how this DMD technology has been combined with the latest LED illumination, high-performance optics, and recent digital camera solutions. The ALP-based DMD projection can be exactly synchronized with one or multiple cameras so that gray-value intensities generated by pulse-width modulation (PWM) are recorded with high linearity. Based upon these components, a novel 3D measuring system with outstanding properties is described. The "z-Snapper" represents a new class of 3D imaging devices: it is fast enough for time-demanding in-line testing, and it can be built completely mobile: laptop-based, hand-held, and battery-powered. The turnkey system provides a "3D image" as simply as a usual b/w picture is grabbed. It can be instantly implemented into future machine vision applications that will benefit from the step into the third dimension.
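The fringe-projection recording described above is commonly decoded with phase-shifting: the projector displays a sinusoidal pattern at several known phase offsets and the camera intensities at each pixel determine the wrapped fringe phase, which encodes height. The paper does not specify its phase algorithm, so the classic four-step variant below is an illustrative assumption, not the z-Snapper's actual processing.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Recover the wrapped fringe phase from four images of a sinusoidal
    pattern shifted in 90-degree steps: I_k = A + B*cos(phi + k*pi/2).

    I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi), so the ambient
    level A and modulation B cancel out of the arctangent.
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

Feeding in four synthetic intensities generated from a known phase returns that phase (up to 2π wrapping), regardless of the ambient level and fringe contrast.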

  18. Development and evaluation of a LOR-based image reconstruction with 3D system response modeling for a PET insert with dual-layer offset crystal design

    NASA Astrophysics Data System (ADS)

    Zhang, Xuezhu; Stortz, Greg; Sossi, Vesna; Thompson, Christopher J.; Retière, Fabrice; Kozlowski, Piotr; Thiessen, Jonathan D.; Goertzen, Andrew L.

    2013-12-01

In this study we present a method of 3D system response calculation for analytical computer simulation and statistical image reconstruction for a magnetic resonance imaging (MRI) compatible positron emission tomography (PET) insert system that uses a dual-layer offset (DLO) crystal design. The general analytical system response functions (SRFs) for detector geometric response and inter-crystal penetration of coincident crystal pairs are derived first. We implemented a 3D ray-tracing algorithm with 4π sampling for calculating the SRFs of coincident pairs of individual DLO crystals. The determination of which detector blocks are intersected by a gamma ray is made by calculating the intersection of the ray with virtual cylinders with radii just inside the inner surface and just outside the outer edge of each crystal layer of the detector ring. For efficient ray-tracing computation, the detector block and the ray to be traced are then rotated so that the crystals are aligned along the X-axis, facilitating calculation of ray/crystal-boundary intersection points. This algorithm can be applied to any system geometry using either a single-layer (SL) or multi-layer array design, with or without offset crystals. For effective data organization, a direct line-of-response (LOR) based indexed histogram-mode method is also presented in this work. SRF calculation is performed on the fly in both the forward- and back-projection procedures during each iteration of image reconstruction, accelerated through the use of eight-fold geometric symmetry and multi-threaded parallel computation. To validate the proposed methods, we performed a series of analytical and Monte Carlo computer simulations for different system geometries and detector designs. The full-width-at-half-maximum values of the numerical SRFs in both the radial and tangential directions are calculated and compared for various system designs. By inspecting the sinograms obtained for different detector geometries, it can be seen that the DLO crystal
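The virtual-cylinder test described above reduces, in the transaxial plane, to intersecting a ray with a circle centred on the scanner axis. The sketch below is a minimal 2D illustration of that geometric step only (function name mine, unit-length direction assumed); it is not the authors' ray tracer.

```python
import math

def ray_circle_intersections(origin, direction, radius):
    """Parameters t along origin + t*direction (2D, unit-length
    direction) where the ray crosses a circle of the given radius
    centred on the scanner axis; empty list if the ray misses.

    Substituting the ray into x^2 + y^2 = r^2 gives
    t^2 + 2*b*t + c = 0 with b = o.d and c = |o|^2 - r^2.
    """
    ox, oy = origin
    dx, dy = direction
    b = ox * dx + oy * dy
    c = ox * ox + oy * oy - radius * radius
    disc = b * b - c
    if disc < 0:
        return []
    root = math.sqrt(disc)
    return [t for t in (-b - root, -b + root) if t >= 0]
```

Comparing the entry/exit parameters against the inner and outer cylinder radii of each crystal layer tells the tracer which detector blocks the gamma ray can intersect, before any per-crystal boundary work is done.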

  19. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

Recent developments in 3D imaging lidar are presented. Long-range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step-change advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single-photon-counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9 km, with 4 μJ pulses at a frame rate of 100 kHz, using a low-cost fibre laser operating at a wavelength of λ = 1.5 μm. The range resolution is less than 4 cm, providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example absolute rangefinding and depth profiling for long-range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system, are also proposed.
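The range figures above follow directly from time-of-flight: range is half the round-trip time multiplied by the speed of light, so a few-centimetre range resolution corresponds to sub-nanosecond timing precision. A small sketch of the arithmetic (helper names are mine):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s):
    """Target range (m) from the photon round-trip time (s)."""
    return C * round_trip_s / 2.0

def timing_resolution_for(range_resolution_m):
    """Round-trip timing precision (s) needed for a given range
    resolution: the factor 2 accounts for the out-and-back path."""
    return 2.0 * range_resolution_m / C
```

For the quoted 4 cm resolution this gives roughly 0.27 ns of required timing precision, well within the reach of photon-counting detectors.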

  20. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision and well-structured measurements in (industrial) photogrammetry to fully automated non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions on the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths on the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and representation of object or workpiece coordinate systems and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.
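The relative figures quoted above translate into absolute values by multiplying by the largest object diameter; a small sketch of the conversion (function name mine):

```python
def absolute_precision_mm(relative, object_diameter_m):
    """Absolute measurement precision (mm) implied by a relative
    precision (e.g. 1/100000) over the largest object diameter (m)."""
    return relative * object_diameter_m * 1000.0
```

For example, a relative precision of 1:100000 over a 2 m object corresponds to an absolute precision of 0.02 mm.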

  1. Phantom image results of an optimized full 3D USCT

    NASA Astrophysics Data System (ADS)

    Ruiter, Nicole V.; Zapf, Michael; Hopp, Torsten; Dapp, Robin; Gemmeke, Hartmut

    2012-03-01

A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in the elevation dimension, resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. A 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT with a nearly isotropic 3D point spread function (PSF), realizing for the first time the full benefits of a 3D system. In this paper, results of the 3D point spread function measured with a dedicated phantom and images acquired with a clinical breast phantom are presented. The point spread function was shown to be nearly isotropic in 3D, to have very low spatial variability, and to fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 Tesla MRI volume of the breast phantom. Image quality and resolution are isotropic in all three dimensions, confirming the successful optimization experimentally.

  2. 3D imaging of soil pore network: two different approaches

    NASA Astrophysics Data System (ADS)

    Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

    2009-04-01

Pore geometry imaging and its quantitative description is a key factor for advances in the knowledge of physical, chemical and biological soil processes. For many years, photos of flattened surfaces of undisturbed soil samples impregnated with fluorescent resin, and of soil thin sections under the microscope, were the only means available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods were obtained using medical tomographic systems (NMR and X-ray CT). However, images provided by such equipment show strong limitations in terms of spatial resolution. In the last decade, very good results have been obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray micro-tomography has become the most widely applied technique, offering the best compromise between cost, resolution and image size. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected owing to technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work, a comparison between the two methods has been carried out in order to define their advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample, 6.5 cm in diameter and 6.5 cm in height, from the Ap horizon of an alluvial soil showing vertic characteristics, was reconstructed using both a desktop X-ray micro-tomograph (Skyscan 1172) and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy), with the aim of overcoming most of the typical limitations of this technique. The best image resolution was 7.5 µm per voxel using X-ray micro-CT, while 20 µm was the best value using serial sectioning

  3. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However, following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax, and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently, we allude to possible mechanisms for non-binocular-parallax-based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  4. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capture parameters can be converted into an identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for the augmented reality display is generated. The newly generated elemental images contain both the virtual objects and the real-world scene with the desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  5. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    NASA Astrophysics Data System (ADS)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  6. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
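
    The Lucas-Kanade update that the plane-fitting step generalizes is an iterative, linearized least-squares solve for a displacement. A one-dimensional toy sketch of that update (illustrative signal and shift, not Ames Stereo Pipeline code):

```python
import numpy as np

def lucas_kanade_shift(f, g, iters=10):
    """Estimate the subpixel shift d with g(x) ~ f(x + d), 1-D Lucas-Kanade."""
    d = 0.0
    x = np.arange(len(f), dtype=float)
    for _ in range(iters):
        fs = np.interp(x + d, x, f)            # warp f by the current estimate
        grad = np.gradient(fs)                 # spatial derivative
        resid = g - fs                         # residual to explain
        denom = float(np.dot(grad, grad))
        if denom < 1e-12:
            break
        d += float(np.dot(grad, resid)) / denom   # Gauss-Newton update
    return d

x = np.linspace(0, 4 * np.pi, 200)
f = np.sin(x)
true_shift = 2.5                               # in samples
g = np.interp(np.arange(200.0) + true_shift, np.arange(200.0), f)
d_hat = lucas_kanade_shift(f, g)
```

    In stereo matching the same normal-equations solve runs per window in 2-D, with the disparity taking the place of the 1-D shift.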

  7. Sparse aperture 3D passive image sensing and recognition

    NASA Astrophysics Data System (ADS)

    Daneshpanah, Mehdi

    The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued both in academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines including medical imaging and diagnostics, industrial metrology, entertainment, robotics as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems in sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons in normal imaging conditions. At this level, the quantum-limited behavior of light dominates and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing at quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible-spectrum detectors. Thermal objects as cold as 250 K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible-band, image-forming optics and detector arrays.

  8. Use of INSAT-3D sounder and imager radiances in the 4D-VAR data assimilation system and its implications in the analyses and forecasts

    NASA Astrophysics Data System (ADS)

    Indira Rani, S.; Taylor, Ruth; George, John P.; Rajagopal, E. N.

    2016-05-01

    INSAT-3D, the first Indian geostationary satellite with sounding capability, provides valuable information over India and the surrounding oceanic regions which is pivotal to numerical weather prediction. In collaboration with the UK Met Office, NCMRWF developed the capability to assimilate INSAT-3D Clear Sky Brightness Temperature (CSBT), from both the sounder and the imager, in the 4D-Var assimilation system used at NCMRWF. Of the 18 sounder channels, radiances from 9 channels are selected for assimilation depending on the relevance of the information in each channel. The first three high-peaking channels, the CO2 absorption channels, and the three water vapor channels (channels 10, 11, and 12) are assimilated over both land and ocean, whereas the window channels (channels 6, 7, and 8) are assimilated only over the ocean. Measured satellite radiances are compared with those from short-range forecasts to monitor the data quality. This is based on the assumption that the observed satellite radiances are free from calibration errors and that the short-range forecast provided by the NWP model is free from systematic errors. Innovations (Observation - Forecast) before and after the bias correction indicate how well the bias correction works. Since the biases vary with air mass, time, scan angle, and instrument degradation, an accurate bias correction algorithm is important for the assimilation of INSAT-3D sounder radiances. This paper discusses the bias correction methods and other quality controls used for the selected INSAT-3D sounder channels and the impact of bias-corrected radiances in the data assimilation system, particularly over India and the surrounding oceanic regions.
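
    The innovation and bias-correction bookkeeping can be sketched with synthetic numbers. Operational schemes regress against several air-mass predictors; a single scan-angle predictor shows the idea (all values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
scan_angle = rng.uniform(-50, 50, n)         # degrees (hypothetical geometry)
forecast_bt = 250 + rng.normal(0, 5, n)      # model-equivalent brightness temps (K)
# Simulated observations: forecast plus a constant 1.5 K offset, a scan-angle
# dependent bias of 0.02 K/deg, and random observation error
observed_bt = forecast_bt + 1.5 + 0.02 * scan_angle + rng.normal(0, 0.3, n)

innov = observed_bt - forecast_bt            # innovations (O - F) before correction
# Least-squares fit of bias = a + b * scan_angle (one predictor for illustration)
A = np.column_stack([np.ones(n), scan_angle])
coef, *_ = np.linalg.lstsq(A, innov, rcond=None)
innov_after = innov - A @ coef               # innovations after bias correction
print(innov.mean(), innov_after.mean())
```

    Comparing the innovation statistics before and after the fit is exactly the monitoring step the abstract describes.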

  9. Method for the determination of the modulation transfer function (MTF) in 3D x-ray imaging systems with focus on correction for finite extent of test objects

    NASA Astrophysics Data System (ADS)

    Schäfer, Dirk; Wiegert, Jens; Bertram, Matthias

    2007-03-01

    It is well known that rotational C-arm systems are capable of providing 3D tomographic X-ray images with much higher spatial resolution than conventional CT systems. With flat X-ray detectors, the detector pixel size is typically comparable to the size of the test objects, so the finite extent of the "point" object cannot be neglected when determining the MTF. A practical algorithm has been developed that includes bias estimation and subtraction, averaging in the spatial domain, and correction for the frequency content of the imaged bead or wire. Using this algorithm, the wire and bead methods are analyzed for flat-detector-based 3D X-ray systems with standard CT performance phantoms. Results on both experimental and simulated data are presented. It is found that the approximation of applying the wire-method analysis to a bead measurement is justified within 3% accuracy up to the first zero of the MTF.
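
    The correction for the frequency content of the imaged object amounts to dividing the measured MTF by the transform of the finite-width wire, staying away from its zeros. A numpy sketch under a hypothetical Gaussian system PSF (illustrative parameters, not the paper's algorithm in full):

```python
import numpy as np

# System PSF modeled as a Gaussian (sigma in mm); wire diameter w in mm
sigma, w, dx = 0.25, 0.2, 0.01
x = np.arange(-5, 5, dx)
psf = np.exp(-x ** 2 / (2 * sigma ** 2))
wire = (np.abs(x) <= w / 2 + dx / 4).astype(float)   # finite-width "line" object
lsf = np.convolve(psf, wire, mode="same")            # measured line spread function

f = np.fft.rfftfreq(len(x), d=dx)                    # spatial frequency (cycles/mm)
mtf_meas = np.abs(np.fft.rfft(lsf))
mtf_meas /= mtf_meas[0]
mtf_true = np.abs(np.fft.rfft(psf))
mtf_true /= mtf_true[0]
wire_ft = np.abs(np.sinc(f * w))                     # |FT| of a rect of width w

valid = wire_ft > 0.1                                # avoid dividing near sinc zeros
mtf_corr = np.where(valid, mtf_meas / np.maximum(wire_ft, 1e-9), np.nan)
```

    Dividing out the sinc recovers the system MTF from the wire measurement, which is why the wire-on-bead approximation only holds up to the first MTF zero.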

  10. Dual-view 3D displays based on integral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Qiong-Hua; Deng, Huan; Wu, Fei

    2016-03-01

    We propose three dual-view integral imaging (DVII) three-dimensional (3D) displays. In the spatially multiplexed DVII 3D display, each elemental image (EI) is cut into left and right sub-EIs, which are refracted to the left and right viewing zones by the corresponding micro-lens array (MLA). Different 3D images are reconstructed in the left and right viewing zones, and the viewing angle is decreased. In the DVII 3D display using polarizer parallax barriers, a polarizer parallax barrier is placed in front of both the display panel and the MLA. The polarizer parallax barrier consists of two parts with perpendicular polarization directions. The elemental image array (EIA) is cut into left and right parts. The light emitted from the left part is modulated by the left MLA and reconstructs a 3D image in the right viewing zone, whereas the light emitted from the right part reconstructs another 3D image in the left viewing zone. The 3D resolution is decreased. In the time-multiplexed DVII 3D display, an orthogonal polarizer array is attached to both the display panel and the MLA. The orthogonal polarizer array consists of horizontal and vertical polarizer units, and the polarization directions of adjacent units are orthogonal. In State 1, each EI is reconstructed by its corresponding micro-lens, whereas in State 2, each EI is reconstructed by its adjacent micro-lens. 3D images 1 and 2 are reconstructed alternately with a refresh rate of up to 120 Hz. The viewing angle and 3D resolution are the same as in the conventional II 3D display.

  11. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme for the semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to propagate the semantics of the matched 3D model to the image object. We tested our new recognition framework using the MPEG-7 and Princeton 3D model databases to label unknown images randomly selected from the web. The results show promising performance, with a recognition rate of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images and videos.

  12. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution, and their dependence on model complexity, will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.
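
    The hemoglobin-status part of such a model rests on two-wavelength spectral unmixing of oxy- and deoxyhemoglobin. A minimal sketch of that step (the extinction coefficients below are approximate, illustrative values, not the paper's data):

```python
import numpy as np

# Approximate molar extinction coefficients [1/(cm*M)] -- illustrative values
# rows: wavelengths (750 nm, 850 nm); columns: [HbO2, Hb]
E = np.array([[518.0, 1405.0],
              [1058.0, 691.0]])

def so2_from_mu(mu_a):
    """Unmix oxy-/deoxyhemoglobin concentrations from absorption coefficients
    at two wavelengths, then form the oxygen saturation."""
    c = np.linalg.solve(E, mu_a)   # concentrations [HbO2, Hb]
    return c[0] / c.sum()

# Forward-simulate absorption for a known 70% saturation, then recover it
c_true = np.array([0.7, 0.3])
mu = E @ c_true
so2_hat = so2_from_mu(mu)
```

    Per-voxel maps of so2 and total hemoglobin obtained this way are the inputs a hemodynamic model like MiHMO2 would combine with perfusion and volume fractions.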

  13. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions, as well as significant roadside objects (such as signs and building fronts), for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects for the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  14. The Anatomy of a Fumarole inferred from a 3-D High-Resolution Electrical Resistivity Image of Solfatara Hydrothermal System (Phlegrean Fields, Italy)

    NASA Astrophysics Data System (ADS)

    Gresse, M.; Vandemeulebrouck, J.; Chiodini, G.; Byrdina, S.; Lebourg, T.; Johnson, T. C.

    2015-12-01

    Solfatara, the most active crater of the Phlegrean Fields volcanic complex, has shown over the last ten years a remarkable renewal of activity characterized by an increase in total CO2 degassing from 1500 up to 3000 tons/day, associated with a large ground uplift (Chiodini et al., 2015). In order to precisely image the structure of the shallow hydrothermal system, we performed an extended electrical DC resistivity survey at Solfatara, with about 40 2-D profiles of lengths up to 1 km, as well as soil temperature and CO2 flux measurements over the area. We then performed a 3-D inversion of the ~40,000 resistivity data points using the E4D code (Johnson et al., 2010). At large scale, the results clearly delineate two contrasting structures: a very conductive body (resistivity < 5 Ohm.m) located beneath the Fangaia mud pools, likely associated with a mineralized, liquid-rich plume; and an elongated, more resistive body (20-30 Ohm.m) connected to the main fumarolic area and interpreted as the gas reservoir feeding the fumaroles. At smaller scale, our resistivity model highlights, for the first time, the 3-D anatomy of a fumarole and the interactions between condensate layers and gas chimneys. This high-resolution image of the shallow hydrothermal structure is a new step toward the modeling of this system.

  15. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D/3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeiting, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  16. Scattering robust 3D reconstruction via polarized transient imaging.

    PubMed

    Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai

    2016-09-01

    Reconstructing the 3D structure of scenes in a scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behavior and offer limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve scene depth by detecting the reflection instant on the time profile of a surface point. However, in cases with a scattering medium, rays are both reflected and scattered during transmission, and the depth calculated from the time profile deviates largely from the true value. To handle this problem, we use the different polarization behaviors of the reflection and scattering components and introduce active polarization to separate the reflection component and estimate a scattering-robust depth. Our experiments demonstrate that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944
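
    The separation idea can be illustrated on a synthetic time profile: if the scattering component is assumed fully depolarized, it cancels in the difference of the co- and cross-polarized channels, leaving the reflection instant. A toy sketch (made-up pulses, not measured transients):

```python
import numpy as np

C = 3e8                                     # speed of light (m/s)
t = np.arange(0, 40e-9, 0.1e-9)             # time axis of the transient profile

def pulse(t0, amp, width=0.8e-9):
    """Gaussian pulse on the time axis (a stand-in for a measured transient)."""
    return amp * np.exp(-((t - t0) ** 2) / (2 * width ** 2))

true_depth = 1.5                            # metres
t_refl = 2 * true_depth / C                 # reflection instant (10 ns)
scatter = pulse(4e-9, 1.2, 3e-9)            # broad, early scattering component
# Reflection keeps the illumination polarization; depolarized scattering
# appears equally in the co- and cross-polarized channels.
i_par = pulse(t_refl, 1.0) + scatter
i_perp = scatter

naive_depth = C * t[np.argmax(i_par)] / 2   # biased toward the scattering peak
refl = i_par - i_perp                       # polarization-difference separation
robust_depth = C * t[np.argmax(refl)] / 2
```

    In this toy profile the naive peak picks the scattering lobe (0.6 m) while the difference channel recovers the true 1.5 m depth.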

  17. 3D Multifunctional Ablative Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken

    2015-01-01

    NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.

  18. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  19. 3-D seismic imaging of complex geologies

    NASA Astrophysics Data System (ADS)

    Womble, David E.; Dosanjh, Sudip S.; Vandyke, John P.; Oldfield, Ron A.; Greenberg, David S.

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  20. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-01

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging. PMID:25836861
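
    The careful handling of the Stokes parameters under sparse counts can be sketched as Poisson rate estimation followed by the usual four-angle Stokes combination (synthetic polarization state and counts; the paper's reconstruction additionally applies total-variation denoising):

```python
import numpy as np

rng = np.random.default_rng(7)

# Ground-truth linear polarization state; the intensity behind a polarizer at
# angle a follows I(a) = 0.5 * (S0 + S1*cos(2a) + S2*sin(2a))
S0, S1, S2 = 100.0, 40.0, 30.0
angles = np.deg2rad([0, 45, 90, 135])
rates = 0.5 * (S0 + S1 * np.cos(2 * angles) + S2 * np.sin(2 * angles))

# Photon-starved regime: each elemental exposure yields a Poisson count of a
# few photons; the ML estimate of each rate is the mean over exposures
gain = 0.05                                  # ~1.5-3.5 photons per exposure
counts = rng.poisson(rates * gain, size=(2000, 4))
I = counts.mean(axis=0) / gain               # ML intensity estimates

S0_hat = I[0] + I[2]
S1_hat = I[0] - I[2]
S2_hat = I[1] - I[3]
dolp = np.hypot(S1_hat, S2_hat) / S0_hat     # degree of linear polarization
```

    With enough elemental exposures the estimate approaches the true degree of linear polarization (0.5 here), which is why averaging over the sparse elemental images is essential before forming ratios.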

  1. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm to actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
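
    The idea of tracking local-minima positions across scales can be sketched in one dimension with a coarse-to-fine refinement; this is a simplified stand-in for the two-stage procedure, not the paper's algorithm:

```python
import numpy as np

def coarse_to_fine_minima(profile, scales=(8, 4, 2, 1)):
    """Detect local minima at the coarsest Gaussian scale, then refine their
    positions scale by scale: a 1-D sketch of multi-scale minima tracking."""
    def smooth(p, s):
        k = np.exp(-0.5 * (np.arange(-3 * s, 3 * s + 1) / s) ** 2)
        return np.convolve(p, k / k.sum(), mode="same")

    s0 = smooth(profile, scales[0])
    mins = list(np.where((s0[1:-1] < s0[:-2]) & (s0[1:-1] < s0[2:]))[0] + 1)
    for s in scales[1:]:                  # refine at progressively finer scales
        sm = smooth(profile, s)
        refined = []
        for m in mins:
            lo, hi = max(m - 2 * s, 0), min(m + 2 * s + 1, len(sm))
            refined.append(lo + int(np.argmin(sm[lo:hi])))
        mins = refined
    return sorted(set(mins))

# Two deep valleys (interstice-like) plus fine surface ripple
x = np.linspace(0, 4 * np.pi, 400)
profile = np.cos(x) + 0.05 * np.sin(40 * x)
minima = coarse_to_fine_minima(profile)
```

    Coarse smoothing suppresses the ripple so only the two deep valleys survive, and the fine scales localize them precisely, which is the same rationale as tracking minima over four scales on imprint height maps.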

  2. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images and we analyse the different parameters of the identification process, such as resolution, camouflage scenario, noise impact and degree of lacunarity.

  3. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
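
    The matching stage (strongest degree of match over object orientations at each pixel) can be sketched with normalized cross-correlation over a few rotation hypotheses. A toy numpy sketch with made-up data; the actual system matches 3D object models under imaging geometry:

```python
import numpy as np

def degree_of_match(image, template):
    """Best normalized cross-correlation over four 90-degree object
    orientations, evaluated at every valid pixel offset."""
    th, tw = template.shape
    best = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for k in range(4):                       # object orientation hypothesis
        t = np.rot90(template, k)
        t = (t - t.mean()) / (t.std() + 1e-9)
        for i in range(best.shape[0]):
            for j in range(best.shape[1]):
                w = image[i:i + th, j:j + tw]
                w = (w - w.mean()) / (w.std() + 1e-9)
                best[i, j] = max(best[i, j], float(np.mean(w * t)))
    return best

rng = np.random.default_rng(3)
img = rng.normal(0, 1, (40, 40))
tmpl = rng.normal(0, 1, (8, 8))
img[20:28, 10:18] += 5 * np.rot90(tmpl, 1)   # embed a rotated copy of the object
dom = degree_of_match(img, tmpl)
i_hit, j_hit = np.unravel_index(np.argmax(dom), dom.shape)
```

    Unambiguous local maxima of such a degree-of-match map are what the cueing stage would turn into ranked thumbnails for the analyst.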

  4. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    SciTech Connect

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations using mesh-based integration methods is using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional; therefore, the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features with an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. "Quantitative" image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed, water-tight surface.

  5. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on the CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
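
    A rigid alignment with Gaussian-weighted soft correspondences, in the spirit of a GMM-based registration, can be sketched in 2-D. This is a simplified sketch with an annealed kernel width, not the authors' method (which additionally models orientation and bifurcation weights):

```python
import numpy as np

def rigid_gmm_register(src, dst, sigma0=1.0, iters=80):
    """Rigid 2-D registration with Gaussian-weighted soft correspondences and
    an annealed kernel width (a simplified, GMM-flavored sketch)."""
    R, t, sigma = np.eye(2), np.zeros(2), sigma0
    for _ in range(iters):
        moved = src @ R.T + t
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        W = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # soft src -> dst weights
        target = W @ dst                                # expected correspondences
        # Weighted Procrustes (Kabsch) solve for the best rigid update
        mu_s, mu_t = src.mean(0), target.mean(0)
        H = (src - mu_s).T @ (target - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        sigma = max(sigma * 0.95, 0.05)                 # anneal toward hard matches
    return R, t

rng = np.random.default_rng(5)
src = rng.uniform(-5, 5, (60, 2))                       # e.g. XA centerline points
theta = 0.1                                             # ground-truth rotation (rad)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.3, -0.2])
dst = src @ R_true.T + t_true                           # e.g. CTA centerline points
R_est, t_est = rigid_gmm_register(src, dst)
err = np.abs(src @ R_est.T + t_est - dst).max()
```

    Soft correspondences make the alignment robust to initialization, which is the property the abstract reports for the orientation-extended GMM variant.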

  6. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

    Currently, three imaging spectrometer architectures (tunable filter, dispersive, and Fourier transform) are viable for imaging the universe in three dimensions. Each of these architectures has a domain of greatest utility. The optimum choice among the alternative architectures depends on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each alternative is delineated, both for instruments having ideal performance and for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope are used as a representative case to provide a basis for comparison of the alternatives.

  7. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional elastodynamic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  8. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate and high-definition scanning 3D imaging lidar system requires high frequency bandwidth and sufficient photosensitive area. To solve the problem of the small photosensitive area of an existing indium gallium arsenide detector with a given frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed. Accordingly, a receiving optical system with two hexagonal prisms is provided, and the beam-splitting effect is analyzed in a simulation experiment. Using this novel method, the receiving optical system's FOV can be effectively improved up to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm. PMID:27410800

  9. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution, full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) system design, multiple views for the 3D display are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. The single projector is therefore able to generate an equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.

  10. 3D Imaging of the OH mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Kouahla, M. N.; Moreels, G.; Faivre, M.; Clairemidi, J.; Meriwether, J. W.; Lehmacher, G. A.; Vidal, E.; Veliz, O.

    2010-01-01

    A new and original stereo imaging method is introduced to measure the altitude of the OH nightglow layer and provide a 3D perspective map of the altitude of the layer centroid. Near-IR photographs of the OH layer are taken at two sites separated by a distance of 645 km. Each photograph is processed to provide a satellite view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient (NCC). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12°09′08.2″ S, 75°33′49.3″ W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16°33′17.6″ S, 71°39′59.4″ W, altitude 2272 m) close to Arequipa. 3D maps of the layer surface were retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter was located at 86.3 km on July 26. Comparable wavy relief features appear in the 3D and intensity maps. It is shown that the vertical amplitude of the wave system varies as exp(Δz/2H) within the altitude range Δz = 83.5-88.0 km, H being the scale height. The oscillatory kinetic energy at the altitude of the OH layer lies between 3 × 10^-4 and 5.4 × 10^-4 J/m^3, which is 2-3 times smaller than values derived from partial-reflection radio wave measurements at 52°N latitude.
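The matched-point criterion above is the standard normalized cross-correlation; a minimal sketch of NCC-based patch matching (illustrative only, not the authors' code):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def find_match(patch, image):
    """Locate `patch` in `image` by exhaustive NCC search; returns (score, row, col)."""
    ph, pw = patch.shape
    ih, iw = image.shape
    return max((ncc(patch, image[r:r + ph, c:c + pw]), r, c)
               for r in range(ih - ph + 1) for c in range(iw - pw + 1))
```

Because NCC normalizes out local mean and contrast, it remains discriminative on low-contrast structures such as airglow layers, which is why the abstract favors it.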

  11. Development of 3D microwave imaging reflectometry in LHD (invited).

    PubMed

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms an image of the cut-off surface on 2D (7 × 7 channel) horn-antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along the field lines is observed during EHO.

  12. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, each under both binocular and monocular viewing. The equipment used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 × 69 micro lenses with a focal length of 3 mm and a diameter of 1 mm, arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the display, on the display panel, and 5, 10, 15, and 30 cm behind it under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel and the display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  13. Fast iterative image reconstruction of 3D PET data

    SciTech Connect

    Kinahan, P.E.; Townsend, D.W.; Michel, C.

    1996-12-31

    For count-limited PET imaging protocols, two different approaches to reducing statistical noise are volume, or 3D, imaging to increase sensitivity, and statistical reconstruction methods to reduce noise propagation. These two approaches have largely been developed independently, likely due to the perception of the large computational demands of iterative 3D reconstruction methods. We present results of combining the sensitivity of 3D PET imaging with the noise reduction and reconstruction speed of 2D iterative image reconstruction methods. This combination is made possible by using the recently-developed Fourier rebinning technique (FORE), which accurately and noiselessly rebins 3D PET data into a 2D data set. The resulting 2D sinograms are then reconstructed independently by the ordered-subset EM (OSEM) iterative reconstruction method, although any other 2D reconstruction algorithm could be used. We demonstrate significant improvements in image quality for whole-body 3D PET scans by using the FORE+OSEM approach compared with the standard 3D Reprojection (3DRP) algorithm. In addition, the FORE+OSEM approach involves only 2D reconstruction and it therefore requires considerably less reconstruction time than the 3DRP algorithm, or any fully 3D statistical reconstruction algorithm.
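The OSEM update applied to each rebinned 2D sinogram has a compact multiplicative form; here is a toy sketch on a dense random system matrix (real PET uses sparse projectors and angle-based subsets; all names here are illustrative):

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=20):
    """Ordered-subset EM for y ≈ A x with nonnegative A and x.
    Each sub-iteration uses only the rays (rows) in one subset."""
    n_rays, n_pix = A.shape
    x = np.ones(n_pix)
    subsets = [np.arange(s, n_rays, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            proj = np.maximum(As @ x, 1e-12)  # forward projection of current estimate
            # multiplicative EM update restricted to this subset's rays
            x *= (As.T @ (ys / proj)) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```

Each pass over all subsets costs roughly one classical EM iteration but advances the estimate several times, which is the source of OSEM's speedup noted in the abstract.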

  14. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken at various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using ART-rc algorithm with water as RI matched
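Plain ART, the baseline that ART-rc extends, is a cyclic Kaczmarz scheme; a minimal sketch without the refraction-corrected ray paths (illustrative only; the paper's contribution is precisely the corrected rayline geometry, which is omitted here):

```python
import numpy as np

def art(A, y, n_sweeps=200, relax=1.0):
    """Classical ART (Kaczmarz): cyclic row-wise updates
    x <- x + relax * (y_i - a_i·x) / ||a_i||^2 * a_i,
    where row a_i encodes one ray through the volume."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a, yi in zip(A, y):
            x += relax * (yi - a @ x) / (a @ a) * a
    return x
```

In ART-rc the rows a_i would be rebuilt from the refracted (bent) ray paths instead of straight lines; the update rule itself is unchanged.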

  15. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken at various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using ART-rc algorithm with water as RI matched

  16. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural grammar-based modeling, close-range photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, PhotoModeler, and Agisoft PhotoScan are the main software packages representing these approaches, respectively. These packages take different approaches and methods to image-based 3D city modeling. A literature study shows that, to date, no comprehensive comparative study is available on creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and output 3D model products. The study area for this research work is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters and factors, reports practical experience, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, with comments on what can and cannot be done with each package. The study concludes that every package has advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, PhotoModeler gives good results. For large city

  17. Tool 3D geometry measurement system

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Ni, Jun; Sun, Yi; Lin, Xuewen

    2001-10-01

    A new non-contact tool 3D geometry measurement system based on machine vision is described. In this system, analytical and optimization methods are used, respectively, to achieve system calibration, which determines the rotation center of the drill. The data merging method is studied in detail; it translates the scattered groups of raw data from sensor coordinates into drill coordinates and yields the 3D topography of the drill body. Corresponding data processing methods for drill geometry are also studied. Statistical methods are used to remove outliers. The Laplacian of Gaussian operator is used to detect the boundary on the drill cross-section and the drill tip profile. The arithmetic method for calculating the parameters is introduced. Initial measurement results are presented: the cross-section profile and drill tip geometry are shown, pictures of wear on the drill tip are given, and parameters extracted from the cross-section are listed. Compared with measurement results from a CMM, the differences between this drill geometry measurement system and the CMM are: drill radius, 0.020 mm; helix angle, 1.31°; web thickness, 0.034 mm.
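The statistical outlier-removal step mentioned above can be as simple as sigma-clipping; a generic sketch (the abstract does not specify the exact criterion used, so this is an assumption for illustration):

```python
import numpy as np

def sigma_clip(values, k=3.0):
    """Keep only samples within k standard deviations of the mean."""
    mu, sd = values.mean(), values.std()
    return values[np.abs(values - mu) <= k * sd]
```

In practice such a filter would be run on the raw range samples before merging them into drill coordinates, so that spurious sensor readings do not distort the fitted geometry.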

  18. Imaging and 3D morphological analysis of collagen fibrils.

    PubMed

    Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

    2012-08-01

    The recent boom in multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge of the spatial arrangement and dimensions of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. The analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. Combining the directional distances with the fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distributions of the orientation and radius of the fibrils over the 3D image. They also provide a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It opens the possibility of monitoring the remodelling of collagen tissues upon a variety of injuries and of guiding tissue engineering, because biomimetic 3D organization and density are required for better integration of implants.
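The orientation-from-inertia-moments idea can be illustrated on a raw point sample of a fibre (a simplified sketch; the paper applies inertia moments to directional distances rather than to coordinates as done here):

```python
import numpy as np

def main_axis(points):
    """Principal axis of a 3D point cloud from its second-moment (inertia) matrix."""
    centered = points - points.mean(axis=0)
    moments = centered.T @ centered              # 3x3 second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(moments)   # eigenvalues in ascending order
    return eigvecs[:, -1]                        # axis of largest spread
```

The eigenvector with the largest eigenvalue is the direction of greatest elongation, which for a fibre-like structure is its local orientation.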

  19. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    PubMed Central

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and X-ray fluoroscopy. The system was validated in two stages: 1) preclinically, to determine function and validate accuracy; and 2) in the clinical setting, to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied, nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases, with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  20. Using Single-Camera 3-D Imaging to Guide Material Handling Robots in a Nuclear Waste Package Closure System

    SciTech Connect

    Rodney M. Shurtliff

    2005-09-01

    Nuclear reactors for generating energy and conducting research have been in operation for more than 50 years, and spent nuclear fuel and associated high-level waste have accumulated in temporary storage. Preparing this spent fuel and nuclear waste for safe and permanent storage in a geological repository involves developing a robotic packaging system—a system that can accommodate waste packages of various sizes and high levels of nuclear radiation. During repository operation, commercial and government-owned spent nuclear fuel and high-level waste will be loaded into casks and shipped to the repository, where these materials will be transferred from the casks into a waste package, sealed, and placed into an underground facility. The waste packages range from 12 to 20 feet in height and four and a half to seven feet in diameter. Closure operations include sealing the waste package and all its associated functions, such as welding lids onto the container, filling the inner container with an inert gas, performing nondestructive examinations on welds, and conducting stress mitigation. The Idaho National Laboratory is designing and constructing a prototype Waste Package Closure System (WPCS). Control of the automated material handling is an important part of the overall design. Waste package lids, welding equipment, and other tools must be moved in and around the closure cell during the closure process. These objects are typically moved from tool racks to a specific position on the waste package to perform a specific function. Periodically, these objects are moved from a tool rack or the waste package to the adjacent glovebox for repair or maintenance. Locating and attaching to these objects with the remote handling system, a gantry robot, in a loosely fixtured environment is necessary for the operation of the closure cell. Reliably directing the remote handling system to pick and place the closure cell equipment within the cell is the major challenge.

  1. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is not yet a clinically relevant tool because the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography, as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the readout accuracy of the previous, slower technologies. With the construction, optimization, and implementation of several components, including a diffuser, band-pass filter, registration mount, and fluid filtration system, the dosimetry system provides high-quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data, including a set of 6 irradiations. DLOS commissioning tests showed sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution.
Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of

  2. Alignment issues, correlation techniques and their assessment for a visible light imaging-based 3D printer quality control system

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2016-05-01

    Quality control is critical to manufacturing. Frequently, techniques are used to define object conformity bounds based on historical quality data. This paper considers techniques for bespoke and small-batch jobs that are not based on statistical models. These techniques also serve jobs where 100% validation is needed due to the mission- or safety-critical nature of particular parts. One issue with this type of system is alignment discrepancies between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets to prevent errors attributable to misalignment.

  3. A dynamic 3D foot reconstruction system.

    PubMed

    Thabet, Ali K; Trucco, Emanuele; Salvi, Joaquim; Wang, Weijie; Abboud, Rami J

    2011-01-01

    Foot problems are varied and range from simple disorders to complex diseases and joint deformities. Wherever possible, the use of insoles, or orthoses, is preferred over surgery. Current insole design techniques are based on static measurements of the foot, despite the fact that orthoses are predominantly used in dynamic conditions while walking or running. This paper presents the design and implementation of a structured-light prototype system providing dense three-dimensional (3D) measurements of the foot in motion, and its use to show that foot measurements in dynamic conditions differ significantly from their static counterparts. The input to the system is a video sequence of a foot during a single step; the output is a 3D reconstruction of the plantar surface of the foot for each frame of the input. Engineering and clinical tests were carried out to validate the system. The accuracy of the system was found to be 0.34 mm with planar test objects. In tests with real feet, the system proved repeatable, with reconstruction differences between trials one week apart averaging 2.44 mm (static case) and 2.81 mm (dynamic case). Furthermore, a study was performed to compare the effective length of the foot between static and dynamic reconstructions using the 4D system. Results showed an average increase of 9 mm for the dynamic case. This increase is substantial for orthotic design and cannot be captured by a static system, and its subject-specific measurement is crucial for the design of effective foot orthoses.

  4. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.
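For reference, once a volume has been reconstructed, an axis-aligned MIP is a single reduction along the ray direction; the abstract's point is that this reconstruct-then-project detour can be bypassed. A minimal sketch of the conventional step:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D volume along one axis:
    each output pixel is the brightest voxel along its ray."""
    return volume.max(axis=axis)
```

Oblique viewing directions additionally require resampling the volume along the rays, which is part of the per-view cost the proposed method avoids.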

  5. Tactile-optical 3D sensor applying image processing

    NASA Astrophysics Data System (ADS)

    Neuschaefer-Rube, Ulrich; Wissmann, Mark

    2009-01-01

    The tactile-optical probe (so-called fiber probe) is a well-known probe in micro-coordinate metrology. It consists of an optical fiber with a probing element at its end. This probing element is adjusted in the imaging plane of the optical system of an optical coordinate measuring machine (CMM). It can be illuminated through the fiber by a LED. The position of the probe is directly detected by image processing algorithms available in every modern optical CMM and not by deflections at the fixation of the probing shaft. Therefore, the probing shaft can be very thin and flexible. This facilitates the measurement with very small probing forces and the realization of very small probing elements (diameter: down to 10 μm). A limitation of this method is that at present the probe does not have full 3D measurement capability. At the Physikalisch-Technische Bundesanstalt (PTB), several arrangements and measurement principles for a full 3D tactile-optical probe have been implemented and tested successfully in cooperation with Werth-Messtechnik, Giessen, Germany. This contribution provides an overview of the results of these activities.

  6. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into the study of three-dimensional imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured by the two cameras share the same spatial resolution, which lets us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map constraining the stereo matching of the stereo pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up the stereo matching process, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
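The idea of restricting the disparity search around a TOF-derived prior can be sketched in software (illustrative SAD block matching; the paper's FPGA implementation and matching cost may differ, and the window size and search radius here are assumptions):

```python
import numpy as np

def constrained_block_match(left, right, init_disp, radius=2, block=3):
    """SAD block matching where each pixel searches only within ±radius of a
    TOF-derived initial disparity (narrow search range).
    Convention: left[r, c] matches right[r, c - d] for disparity d."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for r in range(half, h - half):
        for c in range(half, w - half):
            ref = left[r - half:r + half + 1, c - half:c + half + 1]
            d0 = int(init_disp[r, c])
            best, best_d = np.inf, 0
            for d in range(max(0, d0 - radius), d0 + radius + 1):
                if c - d - half < 0:
                    continue  # candidate window falls outside the right image
                cand = right[r - half:r + half + 1, c - d - half:c - d + half + 1]
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[r, c] = best_d
    return disp
```

Shrinking the search from the full disparity range to 2·radius+1 candidates is what cuts both the computation and the chance of spurious matches, and it maps naturally onto parallel per-pixel hardware.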

  7. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualize in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of visual 3D imaging and thermal imaging, mapping the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  8. 3D stereophotogrammetric image superimposition onto 3D CT scan images: the future of orthognathic surgery. A pilot study.

    PubMed

    Khambay, Balvinder; Nebel, Jean-Christophe; Bowman, Janet; Walker, Fraser; Hadley, Donald M; Ayoub, Ashraf

    2002-01-01

The aim of this study was to register and assess the accuracy of the superimposition method of a 3-dimensional (3D) soft tissue stereophotogrammetric image (C3D image) and a 3D image of the underlying skeletal tissue acquired by 3D spiral computerized tomography (CT). The study was conducted on a model head, in which an intact human skull was embedded with an overlying latex mask that reproduced anatomic features of a human face. Ten artificial radiopaque landmarks were secured to the surface of the latex mask. A stereophotogrammetric image of the mask and a 3D spiral CT image of the model head were captured. The C3D image and the CT images were registered for superimposition by 3 different methods: Procrustes superimposition using artificial landmarks, Procrustes analysis using anatomic landmarks, and partial Procrustes analysis using anatomic landmarks and then registration completion by HICP (a modified Iterative Closest Point algorithm) using a specified region of both images. The results showed that Procrustes superimposition using the artificial landmarks produced an error of superimposition on the order of 10 mm. Procrustes analysis using anatomic landmarks produced an error on the order of 2 mm. Partial Procrustes analysis using anatomic landmarks followed by HICP produced a superimposition accuracy of between 1.25 and 1.5 mm. It was concluded that a stereophotogrammetric and a 3D spiral CT scan image can be superimposed with an accuracy of between 1.25 and 1.5 mm using partial Procrustes analysis based on anatomic landmarks and then registration completion by HICP.
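Procrustes superimposition of paired landmark sets, the core of the methods compared in this study, reduces to the closed-form rigid-alignment (Kabsch) solution. A minimal NumPy sketch with synthetic 3D landmarks (not the study's data, and without the HICP refinement):

```python
import numpy as np

def procrustes_rigid(A, B):
    """Least-squares rigid alignment (rotation + translation) mapping
    landmark set A onto B; both are (N, 3) arrays of paired points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance of landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Check on synthetic data: recover a known rotation and translation.
rng = np.random.default_rng(1)
A = rng.random((10, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
B = A @ R_true.T + t_true
R, t = procrustes_rigid(A, B)
rmse = np.sqrt(((A @ R.T + t - B) ** 2).sum(axis=1).mean())
```

With noise-free landmarks the residual is at machine precision; with real, noisy landmarks the residual rmse is exactly the kind of superimposition error (10 mm vs. 2 mm vs. 1.5 mm) reported above.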

  9. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate a 3D image's geometric accuracy, have not previously been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.

  10. Quantitative validation of 3D image registration techniques

    NASA Astrophysics Data System (ADS)

    Holton Tainter, Kerrie S.; Taneja, Udita; Robb, Richard A.

    1995-05-01

Multimodality images obtained from different medical imaging systems such as magnetic resonance (MR), computed tomography (CT), ultrasound (US), positron emission tomography (PET), and single photon emission computed tomography (SPECT) provide largely complementary characteristic or diagnostic information. Therefore, it is an important research objective to 'fuse' or combine these complementary data into a composite form which would provide synergistic information about the objects under examination. An important first step in the use of complementary fused images is 3D image registration, where multimodality images are brought into spatial alignment so that the point-to-point correspondence between image data sets is known. Current research in the field of multimodality image registration has resulted in the development and implementation of several different registration algorithms, each with its own set of requirements and parameters. Our research has focused on the development of a general paradigm for measuring, evaluating and comparing the performance of different registration algorithms. Rather than evaluating the results of one algorithm under a specific set of conditions, we suggest a general approach to validation using simulation experiments, where the exact spatial relationship between data sets is known, along with phantom data, to characterize the behavior of an algorithm via a set of quantitative image measurements. This behavior may then be related to the algorithm's performance with real patient data, where the exact spatial relationship between multimodality images is unknown. Current results indicate that our approach is general enough to apply to several different registration algorithms. Our methods are useful for understanding the different sources of registration error and for comparing the results between different algorithms.

  11. Free segmentation in rendered 3D images through synthetic impulse response in integral imaging

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.

    2016-06-01

Integral imaging is a technique capable of providing not only the spatial but also the angular information of a three-dimensional (3D) scene. Important applications include 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that takes into account the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system to perform a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure that depends directly on the axial position of the object. By considering different periods of the IRF we recover the depth information of the 3D scene by deconvolution. An advantage of our method is that nonconventional reconstructions can be obtained by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
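The 2D deconvolution against an impulse response can be sketched with a Wiener filter. Here a hypothetical Gaussian IRF stands in for the periodic, depth-dependent InI impulse response the authors derive; the mechanics of Fourier-domain deconvolution are the same:

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-6):
    """2D Wiener deconvolution: Fourier-domain division regularized by k
    to avoid amplifying frequencies where the IRF carries no energy."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # psf centered -> zero-phase
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

n = 32
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-(((yy - n // 2) ** 2 + (xx - n // 2) ** 2) / 2.0))
psf /= psf.sum()                              # unit-gain impulse response
scene = np.zeros((n, n))
scene[8, 20] = 1.0                            # a point object
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

In the paper's setting, choosing the IRF period matched to a given depth selects which axial slice is brought into focus; a mismatched IRF leaves objects at other depths blurred.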

  12. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a full 360-degree horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine; it can capture full 360-degree continuous imagery of the sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  13. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

Medical image segmentation is the process that defines the region of interest in an image volume. Classical segmentation methods, such as region-based and boundary-based methods, cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation that combines the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume, and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. The two methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also validated the framework using expert manual segmentation as the ground truth. The results show that the hybrid segmentation framework may have further clinical use.

  14. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning the minimally invasive procedure, i.e., for selecting an appropriate stent-graft device for treatment of AAA. The technique is based on a 3-D deformable model and uses the level-set algorithm for its implementation. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all the measurements required for appropriate stent-graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, avoiding most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA, which helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
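The level-set machinery referred to here evolves an implicit function whose zero level set is the contour, instead of an explicit snake. A minimal 2D sketch of the idea, using pure curvature-driven flow that shrinks a circular front (a toy speed term, not the paper's CTA-derived one):

```python
import numpy as np

def mean_curvature(phi, eps=1e-8):
    """Curvature of the level sets of phi from central differences."""
    gy, gx = np.gradient(phi)
    gxx = np.gradient(gx, axis=1)
    gyy = np.gradient(gy, axis=0)
    gxy = np.gradient(gx, axis=0)
    num = gxx * gy ** 2 - 2.0 * gx * gy * gxy + gyy * gx ** 2
    den = (gx ** 2 + gy ** 2) ** 1.5 + eps
    return num / den

n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
phi = np.sqrt(xx ** 2 + yy ** 2) - 20.0   # signed distance to a circle, r = 20
area0 = int((phi < 0).sum())
dt = 0.1
for _ in range(800):                       # explicit curvature flow
    gy, gx = np.gradient(phi)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    phi = phi + dt * mean_curvature(phi) * grad
area1 = int((phi < 0).sum())               # the front has shrunk
```

Topology changes (splitting, merging) come for free with this implicit representation, which is why the abstract credits the level-set formulation with handling the complex shapes seen in pathology; a practical segmenter replaces the constant curvature speed with an image-driven term that stops the front at vessel boundaries.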

  15. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information from the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fit of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the inflow time points can be visualized, color coded, dynamically over time. The dynamic visualizations computed using the curve fitting method for estimating the bolus arrival times were rated superior to those computed using conventional approaches to bolus arrival time estimation. In summary, the proposed procedure allows a dynamic visualization of the individual hemodynamic situation and supports a better understanding during the visual evaluation of cerebral vascular diseases.
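Voxelwise bolus arrival extraction amounts to fitting each temporal intensity curve to a bolus model and reading off the arrival parameter. A sketch using a gamma-variate model, a common choice for bolus curves; the paper fits to a patient-individual reference curve instead, so the model and parameter names here are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, t0, amp, alpha, beta):
    """Gamma-variate bolus model; zero before the arrival time t0."""
    tt = np.clip(t - t0, 0.0, None)
    return amp * tt ** alpha * np.exp(-tt / beta)

t = np.linspace(0, 30, 121)                  # frames over 30 s
rng = np.random.default_rng(2)
truth = (5.0, 2.0, 2.0, 1.5)                 # bolus arrives at t0 = 5 s
signal = gamma_variate(t, *truth) + 0.05 * rng.standard_normal(t.size)

p0 = (4.0, 1.0, 1.5, 1.0)                    # rough initial guess
bounds = ([0.0, 0.0, 0.1, 0.1], [15.0, 10.0, 5.0, 5.0])
popt, _ = curve_fit(gamma_variate, t, signal, p0=p0, bounds=bounds)
t0_est = popt[0]                             # estimated bolus arrival time
```

Repeating this fit per voxel and color-coding t0_est on the registered 3D surface model is exactly the dynamic visualization the abstract describes.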

  16. 3D high-density localization microscopy using hybrid astigmatic/ biplane imaging and sparse image reconstruction.

    PubMed

    Min, Junhong; Holden, Seamus J; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-11-01

Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging since PSFs are far more similar along the axial dimension than the lateral dimensions. Here, we present a new high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum.

  17. Image enhancement and segmentation of fluid-filled structures in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Dudycha, Stephen; McMorrow, Gerald

    2003-05-01

    Segmentation of fluid-filled structures, such as the urinary bladder, from three-dimensional ultrasound images is necessary for measuring their volume. This paper describes a system for image enhancement, segmentation and volume measurement of fluid-filled structures on 3D ultrasound images. The system was applied for the measurement of urinary bladder volume. Results show an average error of less than 10% in the estimation of the total bladder volume.
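Once the fluid-filled region is segmented, the volume measurement step reduces to counting segmented voxels and scaling by the physical voxel size. A minimal sketch, with a hypothetical digital sphere standing in for the segmented bladder:

```python
import numpy as np

def volume_ml(mask, spacing_mm):
    """Volume of a segmented structure: voxel count times voxel size.
    spacing_mm is the (z, y, x) voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> millilitres

# Example: a digital ball of radius 20 mm on a 1 mm isotropic grid.
spacing = (1.0, 1.0, 1.0)
zz, yy, xx = np.mgrid[-25:26, -25:26, -25:26]
mask = (zz ** 2 + yy ** 2 + xx ** 2) <= 20 ** 2
v = volume_ml(mask, spacing)
# analytic sphere volume: (4/3) * pi * 20^3 mm^3 ≈ 33.5 ml
```

The sub-10% error reported above is then dominated by the segmentation itself, not by this counting step, whose discretization error is at the one-percent level for bladder-sized structures.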

  18. Mechanically assisted 3D ultrasound guided prostate biopsy system.

    PubMed

    Bax, Jeffrey; Cool, Derek; Gardi, Lori; Knight, Kerry; Smith, David; Montreuil, Jacques; Sherebrin, Shi; Romagnoli, Cesare; Fenster, Aaron

    2008-12-01

There are currently limitations associated with the prostate biopsy procedure, which is the most commonly used method for a definitive diagnosis of prostate cancer. With the use of two-dimensional (2D) transrectal ultrasound (TRUS) for needle guidance in this procedure, the physician has restricted anatomical reference points for guiding the needle to target sites. Further, any motion of the physician's hand during the procedure may cause the prostate to move or deform to a prohibitive extent. These variations make it difficult to establish a consistent reference frame for guiding a needle. We have developed a 3D navigation system for prostate biopsy which addresses these shortcomings. This system is composed of a 3D US imaging subsystem and a passive mechanical arm that minimizes prostate motion. To validate our prototype, a series of experiments were performed on prostate phantoms. The 3D scan of the string phantom produced minimal geometric distortions, and the geometric error of the 3D imaging subsystem was 0.37 mm. The accuracy of 3D prostate segmentation was determined by comparing the known volume in a certified phantom to a reconstructed volume generated by our system, and was shown to estimate the volume with less than 5% error. Biopsy needle guidance accuracy tests in agar prostate phantoms showed that the mean error was 2.1 mm, and the 3D location of the biopsy core was recorded with a mean error of 1.8 mm. In this paper, we describe the mechanical design and validation of the prototype system using an in vitro prostate phantom. Preliminary results from an ongoing clinical trial show that prostate motion is small, with an in-plane displacement of less than 1 mm during the biopsy procedure.

  19. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine positions. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was equal to 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.

  20. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. The two steps differ because image conditions differ at the two aortic borders. The outputs of these two segmentations give a complete 3-D model of the abdominal aorta. Such a 3-D model is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm because of its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. In segmenting the outer aortic boundary we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

  1. MMSE Reconstruction for 3D Freehand Ultrasound Imaging

    PubMed Central

    Huang, Wei; Zheng, Yibin

    2008-01-01

The reconstruction of 3D ultrasound (US) images from mechanically registered, but otherwise irregularly positioned, B-scan slices is of great interest in image-guided therapy procedures. Conventional 3D ultrasound algorithms have low computational complexity, but the reconstructed volume suffers from severe speckle contamination. Furthermore, current methods cannot reconstruct uniform high-resolution data from several low-resolution B-scans. In this paper, the minimum mean-squared error (MMSE) method is applied to 3D ultrasound reconstruction. Data redundancies due to overlapping samples, as well as the correlation of the target and speckle, are naturally accounted for in the MMSE reconstruction algorithm. Thus, the reconstruction process unifies interpolation and spatial compounding. Simulation results for synthetic US images are presented to demonstrate the excellent reconstruction quality. PMID:18382623
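The linear MMSE reconstruction has the closed form x̂ = RₓHᵀ(HRₓHᵀ + Rₙ)⁻¹y, which is how overlapping, irregular samples are fused in a single linear solve. A small synthetic sketch in 1D, where a hypothetical Gaussian prior with exponential correlation stands in for the target/speckle statistics of real B-scan data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 40                               # unknown grid points, samples
idx = np.arange(n)
R_x = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 4.0)  # smooth prior
# Each irregular sample averages two neighbouring grid points, so samples
# overlap and are redundant, as with overlapping B-scan slices.
H = np.zeros((m, n))
for i in range(m):
    j = rng.integers(0, n - 1)
    H[i, j] = H[i, j + 1] = 0.5
sigma2 = 0.01                               # measurement-noise variance
# Draw a target from the prior and simulate the noisy samples.
x = np.linalg.cholesky(R_x + 1e-9 * np.eye(n)) @ rng.standard_normal(n)
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(m)
# MMSE estimate: interpolation and spatial compounding in one step.
x_hat = R_x @ H.T @ np.linalg.solve(H @ R_x @ H.T + sigma2 * np.eye(m), y)
mse = np.mean((x_hat - x) ** 2)
```

Because the prior covariance couples neighbouring grid points, the estimate interpolates into gaps between samples while averaging (compounding) redundant ones, which is the unification the abstract refers to.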

  2. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets.

    PubMed

    Peng, Hanchuan; Ruan, Zongcai; Long, Fuhui; Simpson, Julie H; Myers, Eugene W

    2010-04-01

    The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a 3D digital atlas of neurite tracts in the fruitfly brain. PMID:20231818

  3. 3D lung image retrieval using localized features

    NASA Astrophysics Data System (ADS)

    Depeursinge, Adrien; Zrimec, Tatjana; Busayarat, Sata; Müller, Henning

    2011-03-01

The interpretation of high-resolution computed tomography (HRCT) images of the chest showing disorders of the lung tissue associated with interstitial lung diseases (ILDs) is time-consuming and requires experience. Whereas automatic detection and quantification of the lung tissue patterns has shown promising results in several studies, its aid to clinicians stops at the challenge of image interpretation, leaving the radiologists with the problem of the final histological diagnosis. Complementary to lung tissue categorization, providing visually similar cases using content-based image retrieval (CBIR) is in line with the clinical workflow of the radiologists. In a preliminary study, a Euclidean distance based on the volume percentages of five lung tissue types was used as the inter-case distance for CBIR. That study showed the feasibility of retrieving similar histological diagnoses of ILD based on visual content, although no localization information was used for CBIR; it was therefore not possible to retrieve and show similar images with pathology appearing at a particular lung position. In this work, a 3D localization system based on lung anatomy is used to localize the low-level features used for CBIR. Compared to our previous study, the introduction of localization features improves early precision for some histological diagnoses, especially when the region in which the lung tissue disorders appear is important.
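The preliminary study's inter-case distance is simply a Euclidean distance between per-case tissue-percentage vectors; localization extends this by concatenating such vectors per anatomical region. A minimal sketch with hypothetical case names and made-up tissue percentages:

```python
import numpy as np

# Hypothetical case signatures: volume percentage of five lung tissue
# types (normal, emphysema, ground glass, fibrosis, micronodules).
# Localized CBIR would concatenate one such vector per lung region.
cases = {
    "fibrosis_case":     np.array([55.0, 1.0, 9.0, 30.0, 5.0]),
    "emphysema_case":    np.array([60.0, 32.0, 3.0, 2.0, 3.0]),
    "ground_glass_case": np.array([58.0, 2.0, 33.0, 4.0, 3.0]),
}

def retrieve(query, cases):
    """Rank stored cases by Euclidean distance to the query signature."""
    return sorted(cases, key=lambda name: np.linalg.norm(cases[name] - query))

query = np.array([50.0, 3.0, 8.0, 34.0, 5.0])   # fibrosis-dominated query
ranking = retrieve(query, cases)                 # most similar case first
```

The top-ranked cases, with their known histological diagnoses, are what is shown to the radiologist alongside the query scan.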

  4. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection

    PubMed Central

    Meaney, Paul M.; Kaufman, Peter A.; diFlorio-Alexander, Roberta M.; Paulsen, Keith D.

    2013-01-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  5. Performance evaluation of CCD- and mobile-phone-based near-infrared fluorescence imaging systems with molded and 3D-printed phantoms

    NASA Astrophysics Data System (ADS)

    Wang, Bohan; Ghassemi, Pejhman; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2016-03-01

Increasing numbers of devices are emerging which involve biophotonic imaging on a mobile platform. Therefore, effective test methods are needed to ensure that these devices provide a high level of image quality. We have developed novel phantoms for performance assessment of near-infrared fluorescence (NIRF) imaging devices. Resin molding and 3D printing techniques were applied for phantom fabrication. Comparisons between two imaging approaches, a CCD-based scientific camera and an NIR-enabled mobile phone, were made based on evaluation of the contrast transfer function and penetration depth. Optical properties of the phantoms were evaluated, including absorption and scattering spectra and fluorescence excitation-emission matrices. The potential viability of contrast-enhanced biological NIRF imaging with a mobile phone is demonstrated, and color-channel-specific variations in image quality are documented. Our results provide evidence of the utility of novel phantom-based test methods for quantifying image quality in emerging NIRF devices.

  6. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections tripled that accuracy. This novel technique exhibited an accurate needle-guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument-tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure. PMID:23703429

  7. Automated 3D whole-breast ultrasound imaging: results of a clinical pilot study

    NASA Astrophysics Data System (ADS)

    Leproux, Anaïs; van Beek, Michiel; de Vries, Ute; Wasser, Martin; Bakker, Leon; Cuisenaire, Olivier; van der Mark, Martin; Entrekin, Rob

    2010-03-01

We present the first clinical results of a novel fully automated 3D breast ultrasound system. This system was designed to match a Philips diffuse optical mammography system, enabling straightforward coregistration of optical and ultrasound images. During a measurement, three 3D transducers scan the breast from 4 different views. The resulting 12 datasets are registered together into a single volume using spatial compounding. In a pilot study, benign and malignant masses could be identified in the 3D images, although lesion visibility is lower than with conventional breast ultrasound. Clear visualization of the breast shape suggests that ultrasound could support the reconstruction and interpretation of diffuse optical tomography images.

  8. A miniature high resolution 3-D imaging sonar.

    PubMed

    Josserand, Tim; Wolley, Jason

    2011-04-01

This paper discusses the design and development of a miniature, high-resolution 3-D imaging sonar. The design utilizes frequency-steered phased array (FSPA) technology. FSPAs present a small, low-power solution to the problem of underwater imaging sonars. The technology provides a method to build sonars with a large number of beams without proportional power, circuitry, and processing complexity. The design differs from previous methods in that the array elements are manufactured from a monolithic material. With this technique the arrays are flat, and considerably smaller element dimensions are achievable, which allows higher frequency ranges and smaller array sizes. In the current frequency range, the demonstrated array has ultra-high image resolution (1″ range × 1° azimuth × 1° elevation) and small size (<3″ × 3″). The design of the FSPA utilizes the phasing-induced frequency-dependent directionality of a linear phased array to produce multiple beams in a forward sector. The FSPA requires only two hardware channels per array and can be arranged in single- and multiple-array configurations that deliver wide-sector 2-D images. 3-D images can be obtained by scanning the array in a direction perpendicular to the 2-D image field and applying suitable image processing to the multiple scanned 2-D images. This paper introduces the 3-D FSPA concept, theory, and design methodology. Finally, results from a prototype array are presented and discussed.
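The frequency-dependent directionality exploited by an FSPA follows from the array factor: with a fixed progressive phase Δψ across elements, the main lobe sits where k·d·sin θ = Δψ, so the beam direction is set by frequency alone. A small sketch with hypothetical array parameters (0.5 mm pitch, 60° phase step, sound speed 1500 m/s), not the prototype's actual design values:

```python
import numpy as np

def array_factor(f, n_elem, d, dpsi, angles, c=1500.0):
    """Array factor magnitude of an n_elem-element linear array with
    spacing d (m) and fixed progressive phase dpsi (rad), in water."""
    k = 2.0 * np.pi * f / c
    n = np.arange(n_elem)
    # per-element phase: geometric steering term minus the applied phase
    af = np.exp(1j * n[:, None] * (k * d * np.sin(angles)[None, :] - dpsi))
    return np.abs(af.sum(axis=0))

angles = np.radians(np.linspace(-60.0, 60.0, 2401))
d, dpsi = 0.5e-3, np.pi / 3               # hypothetical pitch and phase step
f1, f2 = 1.0e6, 1.4e6                     # two operating frequencies (Hz)
th1 = angles[array_factor(f1, 32, d, dpsi, angles).argmax()]
th2 = angles[array_factor(f2, 32, d, dpsi, angles).argmax()]
# predicted beam direction: sin(theta) = dpsi * c / (2 * pi * f * d)
```

Sweeping the transmit frequency therefore sweeps the beam across the forward sector, which is how an FSPA forms many beams from only two hardware channels.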

  9. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second, based on a 3-D plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler imaging can be used to perform imaging of blood vessels in humans when using coherent compounding of 3-D tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D ultrafast imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, Tours, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. The proof of principle of 3-D ultrafast power Doppler imaging was first performed by imaging Tygon tubes of various diameters, and in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D color and pulsed Doppler imaging using compounded emissions were also applied in the carotid artery and the jugular vein in one healthy volunteer. PMID:26276956
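    The coherent compounding of tilted plane waves described above can be illustrated with a simplified 2-D delay-and-sum sketch (the paper's implementation is volumetric and runs on a 1024-channel system; the geometry, sampling rate and function names below are hypothetical):

```python
import numpy as np

def compound_plane_waves(rf, angles, x_elem, xi, zi, c=1540.0, fs=20e6):
    """Delay-and-sum beamforming with coherent compounding of tilted
    plane-wave emissions (simplified 2-D sketch, not the authors' code).

    rf      -- received data, shape (n_angles, n_samples, n_elements)
    angles  -- plane-wave tilt angles, rad
    x_elem  -- lateral element positions, m
    xi, zi  -- image grid coordinates, m (arrays of identical shape)
    """
    img = np.zeros(xi.shape)
    n_samples = rf.shape[1]
    for a, alpha in enumerate(angles):
        # time for the tilted plane wave to reach each image point
        t_tx = (zi * np.cos(alpha) + xi * np.sin(alpha)) / c
        for e, xe in enumerate(x_elem):
            # echo travel time from each image point back to this element
            t_rx = np.sqrt((xi - xe) ** 2 + zi ** 2) / c
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            ok = (idx >= 0) & (idx < n_samples)
            # coherent sum: signal adds in phase across emissions
            img[ok] += rf[a, idx[ok], e]
    return img
```

With RF data of shape `(n_angles, n_samples, n_elements)` and a grid built with `np.meshgrid`, each call returns one compounded frame; Doppler processing would then be applied voxel-wise across a sequence of such frames.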

  11. Realization of an aerial 3D image that occludes the background scenery.

    PubMed

    Kakeya, Hideki; Ishizuka, Shuta; Sato, Yuya

    2014-10-01

    In this paper we describe an aerial 3D image that occludes far background scenery based on coarse integral volumetric imaging (CIVI) technology. Many volumetric display devices can present floating 3D images, but most of them do not reproduce visual occlusion. CIVI is a kind of multilayered integral imaging that realizes an aerial volumetric image with visual occlusion by combining multiview and volumetric display technologies. Conventional CIVI, however, cannot show a deep space, because the number of layered panels is limited by the low transmittance of each panel. To overcome this problem, we propose a novel optical design to attain an aerial 3D image that occludes far background scenery. In the proposed system, a translucent display panel with a 120 Hz refresh rate is located between the CIVI system and the aerial 3D image. The system alternates between an aerial image mode and a background image mode. In the aerial image mode, the elemental images are shown on the CIVI display and the inserted translucent display is uniformly translucent. In the background image mode, the black shadows of the elemental images on a white background are shown on the CIVI display and the background scenery is displayed on the inserted translucent panel. By alternating these two modes at 120 Hz, an aerial 3D image that visually occludes the far background scenery is perceived by the viewer.

  12. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffernan, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capability to produce the images themselves. This is an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever before; on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for their full exploitation. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigation and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image database is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  13. Wave-CAIPI for Highly Accelerated 3D Imaging

    PubMed Central

    Bilgic, Berkin; Gagoski, Borjan A.; Cauley, Stephen F.; Fan, Audrey P.; Polimeni, Jonathan R.; Grant, P. Ellen; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To introduce the Wave-CAIPI (Controlled Aliasing in Parallel Imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. Methods The Wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line, while modifying the 3D phase encoding strategy to incur inter-slice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high quality image reconstruction with Wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and Quantitative Susceptibility Mapping (QSM). Results Wave-CAIPI enables full-brain gradient echo (GRE) acquisition at 1 mm isotropic voxel size and R=3×3 acceleration with maximum g-factors of 1.08 at 3T, and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies 2D-CAIPI and Bunched Phase Encoding, Wave-CAIPI yields up to 2-fold reduction in maximum g-factor for 9-fold acceleration at both field strengths. Conclusion Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. PMID:24986223
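    The convolution view of the wave acquisition can be illustrated with a toy 1D model (this is a deliberately simplified sketch of the idea, not the authors' reconstruction code): the sinusoidal gradient deposits a known, position-dependent phase at each readout sample, so each image row is convolved with a known kernel, and because the phase is known exactly, reconstruction reduces to a conjugate multiplication with no data gridding:

```python
import numpy as np

def wave_phase(ky_t, y):
    """Toy model of the wave point-spread effect: the sinusoidal gy
    gradient adds a phase ky(t) * y at each readout sample, i.e. a
    known convolution of the row in the readout direction."""
    return np.exp(1j * ky_t * y)

n = 256
t = np.linspace(0, 1, n, endpoint=False)
ky_t = 40 * np.sin(2 * np.pi * 7 * t)   # gradient waveform (arbitrary units)
row = np.zeros(n)
row[100] = 1.0                          # one bright voxel in this row
y = 0.3                                 # row position (arbitrary units)

# forward model: apply the known phase in kx-space -> voxel spreads
acquired = np.fft.ifft(np.fft.fft(row) * wave_phase(ky_t, y))
# reconstruction: conjugate multiplication undoes the spreading exactly
recovered = np.fft.ifft(np.fft.fft(acquired) * wave_phase(ky_t, -y))
```

In the actual method the phase depends on both y and z and the inter-slice shifts of 2D-CAIPI are added, but the per-position invertible-convolution structure shown here is what makes the gridding-free reconstruction possible.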

  14. Reduction of attenuation effects in 3D transrectal ultrasound images

    NASA Astrophysics Data System (ADS)

    Frimmel, Hans; Acosta, Oscar; Fenster, Aaron; Ourselin, Sébastien

    2007-03-01

    Ultrasound (US) is one of the most used imaging modalities today, as it is cheap, reliable, safe and widely available. There are, however, a number of issues with US images in general. Besides reflection, which is the basis of ultrasonic imaging, other phenomena such as diffraction, refraction, attenuation, dispersion and scattering appear when ultrasound propagates through different tissues. The generated images are therefore corrupted by false boundaries, lack of signal for surfaces tangential to the direction of ultrasound propagation, a large amount of noise giving rise to local properties, and an anisotropic sampling space, all of which complicate image processing tasks. Although 3D transrectal US (TRUS) probes are not yet widely available, within a few years they will likely be introduced in hospitals. Improving automatic segmentation of 3D TRUS images, making the process independent of the human factor, is therefore desirable. We introduce an algorithm for attenuation correction, reducing enhancement/shadowing effects and average attenuation effects in 3D US images while taking into account the physical properties of US. The acquisition parameters, such as the logarithmic correction, are unknown; therefore no additional information is available to restore the image. As the physical properties are related to the direction of each US ray, the 3D US data set is resampled into cylindrical coordinates using a fully automatic algorithm. Enhancement and shadowing effects, as well as average attenuation effects, are then removed with a rescaling process that optimizes simultaneously along and perpendicular to the US ray direction. A set of tests using anisotropic diffusion illustrates the improvement in image quality, with well-defined structures becoming visible. The evolution of both the entropy and the contrast shows that our algorithm is a suitable pre-processing step for segmentation tasks.
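    The resampling into cylindrical coordinates, so that each column of the resampled volume follows one US ray, can be sketched with SciPy (a simplified assumption of one polar grid per axial slice around a known probe axis; names and sizes are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_cylindrical(vol, center, n_r, n_theta):
    """Resample each axial slice of `vol` (z, y, x) onto a polar
    (r, theta) grid centred on the probe axis, so that attenuation
    processing can operate along and across individual US rays."""
    cy, cx = center
    r = np.arange(n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    yy = cy + rr * np.sin(tt)   # Cartesian sample positions for each (r, theta)
    xx = cx + rr * np.cos(tt)
    out = np.empty((vol.shape[0], n_r, n_theta))
    for z in range(vol.shape[0]):
        out[z] = map_coordinates(vol[z], [yy, xx], order=1, mode="nearest")
    return out
```

After correction, the inverse mapping (polar back to Cartesian) would restore the volume to its original grid.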

  15. Imaging thin-bed reservoirs with 3-D seismic

    SciTech Connect

    Hardage, B.A.

    1996-12-01

    This article explains how a 3-D seismic data volume, a vertical seismic profile (VSP), electric well logs and reservoir pressure data can be used to image closely stacked thin-bed reservoirs. This interpretation focuses on the Oligocene Frio reservoir in South Texas which has multiple thin-beds spanning a vertical interval of about 3,000 ft.

  16. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human error, and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of acquiring up to 1 million points per second, providing a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer Aided System for Labelling Archaeological Excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  17. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  18. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  19. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications, in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce the computational complexity of 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computational complexity significantly. For a rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for the full 3D volume. In addition, the number of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with the Sum of Squared Differences (SSD) as the similarity measure. The search engine is Powell's conjugate direction method. In this paper, only the rigid transform is used. However, it can be extended to an affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) are used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment, and structural changes in materials before and after compression. Evaluation of registration accuracy between the pseudo-3D method and the true 3D method has
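    One 2D step of the pseudo-3D scheme, rigid registration of a single view by minimizing the SSD with Powell's conjugate direction method, can be sketched as follows (function names and the 3-parameter ordering are illustrative, not the authors' code):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, rotate as nd_rotate
from scipy.optimize import minimize

def register_2d(fixed, moving):
    """Estimate (dy, dx, angle_deg) that warps `moving` onto `fixed` by
    minimizing the Sum of Squared Differences with Powell's method,
    as would be done per orthogonal view in the pseudo-3D scheme."""
    def ssd(params):
        dy, dx, angle = params
        warped = nd_rotate(moving, angle, reshape=False, order=1)
        warped = nd_shift(warped, (dy, dx), order=1)
        return np.sum((fixed - warped) ** 2)
    res = minimize(ssd, x0=[0.0, 0.0, 0.0], method="Powell")
    return res.x
```

In the full method these 2D solves are applied to the transaxial, sagittal and coronal views in turn, with the accumulated transformation used to re-slice the volume before each step, so the 6-parameter 3D search never has to be performed directly.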

  20. Reducing the influence of direct reflection on return signal detection in a 3D imaging lidar system by rotating the polarizing beam splitter.

    PubMed

    Wang, Chunhui; Lee, Xiaobao; Cui, Tianxiang; Qu, Yang; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-03-01

    The direction rule of a laser beam traveling through a deflected polarizing beam splitter (PBS) cube is derived. It reveals that, due to the influence of end-face reflection of the PBS at the detector side, the emergent beam arising from the incident beam remains parallel to its direction in the original, unrotated case, with only a very small translation interval between them. The formula for this translation interval is also given. Meanwhile, the emergent beam from the return signal at the detector side deflects at an angle twice the PBS rotation angle. This has been verified experimentally. The intensity transmittance of the emergent beam propagating in the PBS changes very little if the rotation angle is less than 35 deg. In a 3D imaging lidar system, rotating the PBS cube by an angle separates the direction of the return signal optical axis from that of the original, which can decrease or eliminate the influence of direct reflection caused by the prism end face on target return signal detection. This has also been checked by experiment. PMID:26974613

  1. 3D Winding Number: Theory and Application to Medical Imaging

    PubMed Central

    Becciu, Alessandro; Fuster, Andrea; Pottek, Mark; van den Heuvel, Bart; ter Haar Romeny, Bart; van Assen, Hans

    2011-01-01

    We develop a new, mathematically elegant formulation to detect critical points of 3D scalar images. It is based on a topological number, which is the generalization to three dimensions of the 2D winding number. We illustrate our method by considering three different biomedical applications, namely, detection and counting of ovarian follicles and neuronal cells, and estimation of cardiac motion from tagged MR images. Qualitative and quantitative evaluation emphasizes the reliability of the results. PMID:21317978
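    The 2D winding number that the paper generalizes to 3D can be computed numerically: sample the gradient (or any vector field) along a closed loop around a candidate point and accumulate the angle it sweeps; a nonzero total signals a critical point inside the loop. A minimal sketch:

```python
import numpy as np

def winding_number(field_on_loop):
    """Winding number of a 2D vector field sampled along a closed loop:
    the total angle swept by the field divided by 2*pi. Nonzero values
    indicate a critical point enclosed by the loop."""
    ang = np.arctan2(field_on_loop[:, 1], field_on_loop[:, 0])
    d = np.diff(np.concatenate([ang, ang[:1]]))   # close the loop
    d = (d + np.pi) % (2 * np.pi) - np.pi         # wrap increments to [-pi, pi)
    return int(round(d.sum() / (2 * np.pi)))

# sample two classic fields on a small circle around the origin
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
source = np.stack([np.cos(t), np.sin(t)], axis=1)    # v(x, y) = (x, y)
saddle = np.stack([np.cos(t), -np.sin(t)], axis=1)   # v(x, y) = (x, -y)
```

A source or sink yields +1 and a saddle -1; in 3D the analogous topological number is an integral over a closed surface rather than a loop.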

  2. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  3. 2D/3D image (facial) comparison using camera matching.

    PubMed

    Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C

    2006-11-10

    A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly with the position and orientation of the camera. In the case of a cooperating suspect, a 3D image may be taken using, e.g., a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair could not be distinguished from the matching pair of faces. Facial expression and image resolution were more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done, though, on larger sets of facial comparisons. PMID:16337353
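    The projection-and-ratio comparison can be sketched with a simple pinhole camera model (all names, poses and landmark coordinates below are hypothetical illustrations, not the study's data):

```python
import numpy as np

def project_points(X, f, R, t, aspect=1.0):
    """Project 3D facial landmarks X (N, 3) to 2D with a pinhole model:
    rotation R, translation t, focal length f, pixel aspect ratio."""
    Xc = X @ R.T + t
    u = f * Xc[:, 0] / Xc[:, 2]
    v = f * aspect * Xc[:, 1] / Xc[:, 2]
    return np.stack([u, v], axis=1)

def distance_ratios(pts):
    """Ratios of all pairwise inter-landmark distances to the first one;
    these are the scale-free quantities compared between images."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    dv = d[iu]
    return dv / dv[0]
```

Because the ratios are invariant to the focal length (all projected distances scale together), the comparison hinges on matching the head pose R, t between the 3D scan and the perpetrator's 2D image, which is exactly where the method's sensitivity lies.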

  4. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    NASA Astrophysics Data System (ADS)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. This results in a frame rate of 18 Hz for both techniques. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256, compared to Explososcan. In terms of FWHM, Explososcan and synthetic aperture were found to perform similarly. At 90 mm depth, Explososcan's FWHM is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was thus shown to reduce the number of transmit channels by a factor of four and still, in general, improve imaging quality.

  5. Refraction Correction in 3D Transcranial Ultrasound Imaging

    PubMed Central

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
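    Applying Snell's law in 3D when tracing propagation paths through the two-layer model comes down to the standard vector form of refraction, which can be written compactly (a textbook formulation sketched here, not the authors' code):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law in 3D: refract unit direction `d` at a
    planar interface with unit normal `n` (oriented against the incoming
    ray), passing from refractive index (or acoustic slowness analogue)
    n1 into n2. Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n
```

Iterating this refraction for candidate paths through the skull layer, and converting the resulting path lengths to time delays, yields the precomputed delay sets the system switches between at the push of a button.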

  6. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.

  7. A systematic review of image segmentation methodology, used in the additive manufacture of patient-specific 3D printed models of the cardiovascular system

    PubMed Central

    Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I

    2016-01-01

    Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of the literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for the procedure to be reproduced. Conclusions and implications of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding the reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods. PMID:27170842

  8. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  9. Optical-CT imaging of complex 3D dose distributions

    NASA Astrophysics Data System (ADS)

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

    The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel-dosimetry is a relatively new technique with potential to address this imbalance by providing high resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a 1st generation optical-CT scanner capable of high resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions including intensity-modulated-radiation-therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable attenuation phantoms. Physical techniques and image processing methods were developed to minimize deleterious effects of refraction, reflection, and scattered laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed however (rms discrepancy 3% at high dose levels) indicating further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel-dosimetry.

  10. 3D robust digital image correlation for vibration measurement.

    PubMed

    Chen, Zhong; Zhang, Xianmin; Fatikow, Sergej

    2016-03-01

Discrepancies between speckle images under dynamic measurement, caused by the different viewing angles, deteriorate the correspondence in 3D digital image correlation (3D-DIC) for vibration measurement. To address this bottleneck, this paper presents two robust 3D-DIC methods for vibration measurement, SSD-robust and SWD-robust, which use a sum-of-square-difference (SSD) estimator plus a Geman-McClure regulating term and a Welch estimator plus a Geman-McClure regulating term, respectively. Because the regulating term, with an adaptive rejecting bound, lessens the influence of abnormal pixel data in the dynamic measuring process, the robustness of the algorithm is enhanced. Robustness and precision evaluation experiments using a dual-frequency laser interferometer were carried out. The experimental results indicate that the two presented robust estimators suppress the effects of abnormalities in the speckle images while maintaining higher precision in vibration measurement than the traditional SSD method; the SWD-robust and SSD-robust methods are suitable for weak image noise and strong image noise, respectively. PMID:26974624
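
    The effect of the Geman-McClure regulating term can be illustrated with a short sketch (function names and the fixed sigma are illustrative; the paper uses an adaptive rejecting bound, which is not reproduced here). Residuals contribute quadratically when small but saturate for outliers, so a single abnormal pixel no longer dominates the matching cost:

    ```python
    import numpy as np

    def geman_mcclure(r, sigma):
        """Geman-McClure penalty: ~r**2 for small residuals, saturates near
        1 for outliers, so abnormal pixels stop dominating the cost."""
        return r ** 2 / (sigma ** 2 + r ** 2)

    def robust_ssd(ref, cur, sigma=10.0):
        """SSD-style matching cost with a Geman-McClure regulating term.
        ref, cur: equal-shape grayscale subsets (2-D arrays)."""
        r = cur.astype(float) - ref.astype(float)
        return geman_mcclure(r, sigma).sum()

    # One grossly corrupted pixel blows up classical SSD but barely moves
    # the robust cost.
    rng = np.random.default_rng(0)
    ref = rng.uniform(0.0, 255.0, (21, 21))
    cur = ref.copy()
    cur[10, 10] += 1000.0                    # abnormal pixel datum
    cost_plain = ((cur - ref) ** 2).sum()    # classical SSD: ~1e6
    cost_robust = robust_ssd(ref, cur)       # bounded contribution: < 1
    ```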

  11. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
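
    The temporal bone-length-constancy idea can be sketched as a simple penalty (a minimal illustration with invented names and penalty form, not the authors' exact regularization term): a bone between two joints should keep the same length in every frame, so the temporal variance of each bone's length is penalized.

    ```python
    import numpy as np

    def bone_length_regularizer(joints, bones):
        """Penalize the temporal variance of each bone's length.
        joints: (T, J, 3) array of 3D joint positions over T frames.
        bones:  list of (i, j) joint-index pairs forming the skeleton."""
        penalty = 0.0
        for i, j in bones:
            lengths = np.linalg.norm(joints[:, i] - joints[:, j], axis=1)
            penalty += np.var(lengths)      # zero iff the length is constant
        return penalty

    # A rigidly translating 3-joint skeleton keeps every bone length constant.
    T = 5
    base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
    rigid = np.stack([base + t * np.array([0.1, 0.0, 0.0]) for t in range(T)])
    bones = [(0, 1), (1, 2)]
    reg_rigid = bone_length_regularizer(rigid, bones)   # ~0

    # Perturbing one joint in one frame violates constancy and is penalized.
    jittered = rigid.copy()
    jittered[2, 1] += 0.5
    reg_jit = bone_length_regularizer(jittered, bones)  # > 0
    ```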

  13. Geodetic imaging of potential seismogenic asperities on the Xianshuihe-Anninghe-Zemuhe fault system, southwest China, with a new 3-D viscoelastic interseismic coupling model

    NASA Astrophysics Data System (ADS)

    Jiang, Guoyan; Xu, Xiwei; Chen, Guihua; Liu, Yajing; Fukahata, Yukitoshi; Wang, Hua; Yu, Guihua; Tan, Xibin; Xu, Caijun

    2015-03-01

We use GPS and interferometric synthetic aperture radar (InSAR) measurements to image the spatial variation of interseismic coupling on the Xianshuihe-Anninghe-Zemuhe (XAZ) fault system. A new 3-D viscoelastic interseismic deformation model is developed to infer the rotation and strain rates of blocks, postseismic viscoelastic relaxation, and the interseismic slip deficit on the fault surface, discretized with triangular dislocation patches. Inversions of synthetic data show that the optimal weight ratio and smoothing factor are both 1. Successive joint inversions of the geodetic data with different viscosities reveal six potential fully coupled asperities on the XAZ fault system. Among them, the potential asperity between Shimian and Mianning, which does not exist in the case of 10^19 Pa s, is confirmed by a published microearthquake depth profile. There is also another potential, partially coupled asperity between Daofu and Kangding with a length scale of up to 140 km. All these asperity sizes are larger than the minimum resolvable wavelength. The minimum and maximum slip deficit rates near the Moxi town are 7.0 and 12.7 mm/yr, respectively. Different viscosities have little influence on the roughness of the slip-deficit-rate distribution and the fitting residuals, which probably suggests that our observations cannot provide a good constraint on the viscosity of the middle-to-lower crust. The calculation of seismic moment accumulation on each segment indicates that the Songlinkou-Selaha (S4), Shimian-Mianning (S7), and Mianning-Xichang (S8) segments are very close to the rupture of characteristic earthquakes. However, the confidence level is limited by sparse near-fault observations.

  14. Astigmatic multifocus microscopy enables deep 3D super-resolved imaging

    PubMed Central

    Oudjedi, Laura; Fiche, Jean-Bernard; Abrahamsson, Sara; Mazenq, Laurent; Lecestre, Aurélie; Calmon, Pierre-François; Cerf, Aline; Nöllmann, Marcelo

    2016-01-01

    We have developed a 3D super-resolution microscopy method that enables deep imaging in cells. This technique relies on the effective combination of multifocus microscopy and astigmatic 3D single-molecule localization microscopy. We describe the optical system and the fabrication process of its key element, the multifocus grating. Then, two strategies for localizing emitters with our imaging method are presented and compared with a previously described deep 3D localization algorithm. Finally, we demonstrate the performance of the method by imaging the nuclear envelope of eukaryotic cells reaching a depth of field of ~4µm. PMID:27375935

  16. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with the spatial data and query-processing capabilities of geographic information systems, multimedia database functionality and a graphical modeling infrastructure. A new data element, called a Geo-Node, which stores images, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles the transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling, and a smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes, independently of the amount of spatial data and the image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  17. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including background noise, dark counts and so on, remains a significant challenge to obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms during the acquisition of the raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image is obtained from the experimental system despite the strong background noise of a sunny day. PMID:23609635
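
    The statistical idea behind this kind of filtering can be sketched in a few lines (a minimal illustration of range gating over a TOF histogram, with invented names; the paper's real-time strategy operates during acquisition and is more elaborate): over repeated pulses, signal photons pile up at the target's TOF while background and dark counts spread roughly uniformly, so the peak bin survives heavy noise.

    ```python
    import numpy as np

    def estimate_range_bin(tof_events, n_bins):
        """Histogram repeated single-photon time-of-flight events and pick
        the peak bin; uniform false alarms average out across bins while
        signal photons concentrate in one bin."""
        hist, _ = np.histogram(tof_events, bins=n_bins, range=(0, n_bins))
        return int(hist.argmax())

    rng = np.random.default_rng(1)
    n_bins = 200
    true_bin = 120
    noise = rng.uniform(0, n_bins, 500)                  # uniform false alarms
    signal = true_bin + 0.5 + rng.normal(0.0, 0.3, 100)  # photons at true TOF
    events = np.concatenate([noise, signal])
    est = estimate_range_bin(events, n_bins)             # recovers true_bin
    ```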

  19. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
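
    The core of depth-map-driven 2D-to-3D synthesis can be sketched as horizontal parallax shifting (function and parameter names are invented, and the shift direction is one common convention; production tools also fill the disoccluded holes this leaves behind): each pixel is displaced left or right in proportion to its depth value, with nearer pixels painted last so they occlude farther ones.

    ```python
    import numpy as np

    def synthesize_stereo_pair(image, depth, max_disparity=8):
        """Synthesize left/right views from a planar image plus a depth map.
        depth is in [0, 1], with 1 = nearest (largest parallax)."""
        h, w = image.shape[:2]
        left = np.zeros_like(image)
        right = np.zeros_like(image)
        for y in range(h):
            # Paint far pixels first so nearer pixels overwrite them.
            for x in np.argsort(depth[y], kind="stable"):
                d = int(round(depth[y, x] * max_disparity / 2))
                if 0 <= x + d < w:
                    left[y, x + d] = image[y, x]
                if 0 <= x - d < w:
                    right[y, x - d] = image[y, x]
        return left, right

    img = np.arange(100, dtype=np.uint8).reshape(10, 10)
    depth = np.zeros((10, 10))
    depth[:, 5] = 1.0                    # one "near" column of pixels
    L, R = synthesize_stereo_pair(img, depth)
    # The near column is shifted 4 px right in the left view and 4 px
    # left in the right view; zero-depth pixels stay put.
    ```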

  20. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neuro-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the FCM system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance the analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
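
    The FCM system equations that AFLC borrows for centroid relocation are standard; a minimal sketch (variable names and the toy data are illustrative, and the ART-style competitive/vigilance stages are omitted): memberships are computed from inverse squared distances, then centroids are recomputed as membership-weighted means.

    ```python
    import numpy as np

    def fcm_update(X, V, m=2.0, eps=1e-12):
        """One round of the standard Fuzzy c-Means equations.
        X: (N, d) data, V: (c, d) current centroids, m: fuzzifier."""
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps  # (N, c)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)       # membership values
        Um = U ** m
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]   # relocated centroids
        return U, V_new

    # Two well-separated clusters; the centroids migrate to their centers.
    X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 0.0], [5.2, 0.0]])
    V = np.array([[1.0, 0.0], [4.0, 0.0]])
    for _ in range(20):
        U, V = fcm_update(X, V)
    # V is now close to [[0.1, 0], [5.1, 0]].
    ```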

  1. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere. PMID:17030540

  2. Handheld camera 3D modeling system using multiple reference panels

    NASA Astrophysics Data System (ADS)

    Fujimura, Kouta; Oue, Yasuhiro; Terauchi, Tomoya; Emi, Tetsuichi

    2002-03-01

A novel 3D modeling system in which a target object is easily captured and modeled using a hand-held camera with several reference panels is presented in this paper. The reference panels are designed so that the camera position can be obtained and the panels can be discriminated from each other. A conventional 3D modeling system using a single reference panel places several restrictions on the target object, specifically its size and location. Our system uses multiple reference panels, set around the target object, to remove these restrictions. The main features of this system are as follows: 1) The whole shape and photo-realistic textures of the target object can be digitized from several still images or a movie captured with a hand-held camera, with each camera location calculated using the reference panels. 2) Our system can be provided as a software-only product; there are no special hardware requirements, not even for the reference panels, because they can be printed from image files or by the software. 3) The system can be applied to digitize larger objects. In the experiments, we developed and used an interactive region-selection tool to detect the silhouette in each image instead of using the chroma-keying method. We have tested our system with a toy object. The calculation time is about 10 minutes (excluding capturing the images and extracting the silhouettes with our tool) on a personal computer with a Pentium III processor (600 MHz) and 320 MB of memory; however, it depends on how complex the images are and how many images are used. Our future plan is to evaluate the system with various kinds of objects, specifically large ones in outdoor environments.

  3. Method for extracting the aorta from 3D CT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2007-03-01

Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-Line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once, using several representative human MDCT images, and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best-fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region-recovery method to arrive at the complete constructed 3D aorta. The region-recovery method consists of two steps: a model-based step and a region-growing step. The region-growing step can recover regions outside the model coverage and non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.

  4. 3-D Mesh Generation Nonlinear Systems

    1994-04-07

INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  5. Diffractive centrosymmetric 3D-transmission phase gratings positioned at the image plane of optical systems transform lightlike 4D-WORLD as tunable resonators into spectral metrics...

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1999-08-01

Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the λ-chromatic and the reciprocal ν-aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λh1h2h3 = λ111 with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ111, the νλ = λh1h2h3/λ111 factor of phase velocity becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between the 3D grating and light, space and time. In the reciprocal space, equal/unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patchwide multiple correlations in the spatial frequency spectrum, etc.

  6. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

Owing to its convenience and non-invasiveness, ultrasound has become an essential tool for diagnosing fetal abnormalities during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, to accelerate rendering, a thin shell is defined, based on the detected contours, to separate the observed organ from unrelated structures. In this way, we support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  7. Combined elasticity and 3D imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Li, Yinbo; Hossack, John A.

    2005-04-01

A method is described for repeatably assessing the elasticity and 3D extent of suspected prostate cancers. Elasticity is measured by controlled water inflation of a sheath placed over a modified transrectal ultrasound transducer. The benefit of using fluid inflation is that it should be possible to make repeatable, accurate measurements of elasticity, which are of interest in the serial assessment of prostate cancer progression or remission. The second aspect of the work uses auxiliary tracking arrays, placed at each end of the central imaging array, that allow the transducer to be rotated while simultaneously collecting 'tracking' information, thus allowing the positions of successive image planes to be located in 3D space with approximately 11% volumetric accuracy. In this way, we present a technique for quantifying the volumetric extent of suspected cancer in addition to making measures of elastic anomalies.

  8. 3D reconstruction of concave surfaces using polarisation imaging

    NASA Astrophysics Data System (ADS)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
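
    For context, the classic Lambertian photometric-stereo solve that such polarisation-based variants build on can be sketched as a per-pixel least-squares problem (the paper itself argues for using the specular component instead; this sketch, with invented names and synthetic data, shows only the standard diffuse baseline): with intensities I observed under k known light directions L, solve I = L @ (albedo * n) for the scaled normal.

    ```python
    import numpy as np

    def photometric_stereo_normals(I, L):
        """Recover unit normals and albedo from an image stack.
        I: (k, P) intensities under k lights, L: (k, 3) light directions."""
        G, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, P) scaled normals
        albedo = np.linalg.norm(G, axis=0)
        N = G / np.maximum(albedo, 1e-12)
        return N, albedo

    # Synthetic single-pixel check with a known normal and albedo.
    n_true = np.array([0.0, 0.0, 1.0])
    rho = 0.8
    L = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
    I = rho * (L @ n_true)[:, None]                    # (3, 1) image stack
    N, albedo = photometric_stereo_normals(I, L)       # recovers n_true, rho
    ```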

  9. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. PMID:21602004

  10. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data-fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning, using the laser point-cloud data as the basis and a Digital Ortho-photo Map as an auxiliary, with 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene has good fidelity and that the accuracy of the scene meets the needs of 3D scene construction.

  11. 3D Viewer Platform of Cloud Clustering Management System: Google Map 3D

    NASA Astrophysics Data System (ADS)

    Choi, Sung-Ja; Lee, Gang-Soo

A new management framework is needed for cloud environments because of the platform convergence that accompanies changing computing environments. Independent software vendors (ISVs) and small businesses find it hard to adapt the management systems of platforms offered by large providers. This article proposes a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS as an extension of the CCMS [1].

  12. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract specific features in images, such as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a chain that represents a single object.
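
    The object-linking step lends itself to a short sketch (data layout and names are invented, and the multi-extractor fusion is omitted): a feature in slice z is appended to a chain whose last feature lies in slice z-1 within the threshold radius; otherwise it starts a new object.

    ```python
    import math

    def link_slices(slices, radius):
        """Link features detected in successive 2-D slices into 3-D chains.
        slices: list of lists of (x, y) feature centers, one list per slice."""
        chains = [[(0, p)] for p in slices[0]]           # (slice index, point)
        for z in range(1, len(slices)):
            for p in slices[z]:
                best, best_d = None, radius
                for chain in chains:
                    zc, pc = chain[-1]
                    if zc != z - 1:
                        continue                          # chain not adjacent
                    d = math.hypot(p[0] - pc[0], p[1] - pc[1])
                    if d <= best_d:
                        best, best_d = chain, d
                if best is not None:
                    best.append((z, p))
                else:
                    chains.append([(z, p)])               # start a new object
        return chains

    # A pipe cross-section drifting across slices, plus one stray detection:
    # linking yields one 3-slice chain and one isolated single-slice object.
    slices = [[(10.0, 10.0)], [(10.5, 10.2)], [(11.0, 10.4), (40.0, 40.0)]]
    chains = link_slices(slices, radius=2.0)
    ```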

  13. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  14. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.
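    The validation metric used here, the mean square difference between the registered image and the target, is simple enough to state directly; a minimal pure-Python sketch (the function name is ours, and the fluid registration itself is of course far more involved):

```python
def mean_square_difference(a, b):
    """Registration accuracy measure: mean of squared intensity
    differences between the registered image a and the target b,
    both given as 2D lists of equal shape."""
    n = 0
    total = 0.0
    for row_a, row_b in zip(a, b):
        for va, vb in zip(row_a, row_b):
            total += (va - vb) ** 2
            n += 1
    return total / n
```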

  15. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
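    The 2D iterative stage can be illustrated with the basic MLEM update that OSEM accelerates by cycling through ordered subsets of the projection data. This is a generic dense-matrix sketch, not the ASPIRE implementation, and it omits the detector-blurring factor the paper adds to the system matrix:

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Single-subset EM (MLEM) sketch: x is the image estimate,
    y the measured sinogram counts, A the system matrix
    (detector bins x image pixels, all sensitivities > 0).
    OSEM applies the same multiplicative update to row subsets of A."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / sens           # multiplicative EM update
    return x
```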

  16. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation

    PubMed Central

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-01-01

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI. PMID:26191497

  17. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910(®) scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the 2 breast surfaces. 3 observers analyzed the evaluation protocol precision using 2 dummy models (n = 60), 10 test subjects (n = 300), clinically tested it on 30 patients (n = 900) and compared it to established 2-D measurements on 23 breast reconstructive patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88) and may play a part in an objective surgical outcome analysis after incorporation into clinical practice.
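    The core of the evaluation — mirroring one breast surface and averaging the 3D contour difference against the other — might be sketched as below, assuming point-sampled surfaces, a sagittal mirror plane at x = 0, and a simple nearest-neighbour distance (the clinical protocol works on registered dense surface scans):

```python
import math

def mean_contour_difference(right_pts, left_pts):
    """Mirror the left-side surface points across the sagittal plane
    (x -> -x), then return the mean nearest-neighbour distance to the
    right-side points as the mean 3D contour difference."""
    mirrored = [(-x, y, z) for (x, y, z) in left_pts]
    total = 0.0
    for m in mirrored:
        total += min(math.dist(m, p) for p in right_pts)
    return total / len(mirrored)
```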

  18. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    PubMed

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. The integration of two different imaging modalities, anatomical (MRI/CT) and physiological (infrared imaging), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. To summarize, the 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and more fidelity, especially for medical applications in which the temperature changes are clinically significant.

  19. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images.1)2)3)4) The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when the user watches the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images floating in the air, and the observers can touch these floating images and interact with them, for example so that children can model virtual clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method using a single camera rather than a stereo camera, and the results of our viewer system.

  20. A shortcut to align 3D images captured from multiple views

    NASA Astrophysics Data System (ADS)

    Heng, Wei; Wang, Hao

    2008-11-01

    In order to capture the whole shape of an object, many partial 3D images need to be captured from multiple views and aligned into the same 3D coordinate frame. That usually involves both complex software processing and an expensive hardware system. In this paper, a shortcut approach is proposed to align 3D images captured from multiple views. Employing only a calibrated turntable, a single-view 3D camera can capture a sequence of 3D images of an object from different view angles one by one, then align them quickly and automatically. The alignment does not need any help from the operator, and achieves good performance in terms of accuracy, robustness, capture speed and cost. The turntable calibration can be easily implemented by the single-view 3D camera: fixed relative to the turntable, the camera calibrates the revolving axis simply by measuring the positions of a small calibration ball revolving with the turntable at several angles. The system then obtains the coordinate transformation between views at different revolving angles with an LMS algorithm. The formulae for calibration and alignment are given with a precision analysis. Experiments were performed and showed effective results in recovering 3D objects.
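    Once the revolving axis is calibrated, the alignment itself reduces to rotating each captured point set about that axis by the negative of its turntable angle. A Rodrigues-rotation sketch (function and parameter names are ours):

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle):
    """Rodrigues rotation of Nx3 points about the line through
    axis_point with direction axis_dir.  To align a scan taken at
    turntable angle theta to the reference view, rotate it by -theta
    about the calibrated revolving axis."""
    k = np.asarray(axis_dir, float)
    k = k / np.linalg.norm(k)
    p = np.asarray(points, float) - axis_point
    cos, sin = np.cos(angle), np.sin(angle)
    rotated = (p * cos
               + np.cross(k, p) * sin
               + np.outer(p @ k, k) * (1.0 - cos))
    return rotated + axis_point
```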

  1. Random-profiles-based 3D face recognition system.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Lee, Sangyoun

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation.

  2. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  3. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  4. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    PubMed

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

    To evaluate the anatomy of cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution of 3D PSIF-DWI, we collected imaging data of 20-side cavernous regions in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized and 12 trochlear nerves and 6 abducens nerves were partially identified. We also presented preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended by 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves including the parts in the blood flow as in the cavernous sinus region.

  5. Underwater 3D Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the radiometry of images captured at shallow depths and of selecting parameters for point cloud definition and mesh building in 3D modeling software. Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.
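    Point cloud filtering of the kind evaluated here is commonly done by statistical outlier removal; a minimal sketch under that assumption (the paper does not specify this particular filter, and the names are ours):

```python
import numpy as np

def statistical_outlier_filter(points, k=8, std_ratio=2.0):
    """Statistical outlier removal sketch: drop points whose mean
    distance to their k nearest neighbours exceeds the global mean
    of that statistic by more than std_ratio standard deviations."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)    # column 0 is self-distance (0)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[mean_knn <= thresh]
```

    The brute-force distance matrix is O(n^2); real pipelines use a k-d tree for the neighbour search.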

  6. Animal testing using 3D microwave tomography system for breast cancer detection.

    PubMed

    Lee, Jong Moon; Son, Sung Ho; Kim, Hyuk Je; Kim, Bo Ra; Choi, Heyng Do; Jeon, Soon Ik

    2014-01-01

    The three-dimensional microwave tomography (3D MT) system of the Electronics and Telecommunications Research Institute (ETRI) comprises an antenna array, a transmitting/receiving module, a switch matrix module and a signal processing component. The system also includes a patient interface bed as well as a 3D reconstruction algorithm. Here, we perform a comparative analysis of image reconstruction results obtained with the assembled system against MRI results, using the system to image the breasts of dogs. Microwave imaging reconstruction results (at 1,500 MHz) obtained using the ETRI 3D MT system are presented. The system provides computationally reliable diagnosis results from the reconstructed MT image. PMID:25160233

  7. 3D object recognition using kernel construction of phase wrapped images

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Su, Hongjun

    2011-06-01

    Kernel methods are effective machine learning techniques for many image-based pattern recognition problems. Incorporating 3D information is useful in such applications. Optical profilometry and interferometric techniques provide 3D information in an implicit form. Typically, a phase unwrapping process, which is often hindered by the presence of noise, spots of low intensity modulation, and instability of the solutions, is applied to retrieve the proper depth information. In certain applications such as pattern recognition problems, the goal is to classify the 3D objects in the image, rather than to simply display or reconstruct them. In this paper we present a technique for constructing kernels on the measured data directly, without explicit phase unwrapping. Such a kernel naturally incorporates the 3D depth information and can be used to improve systems involving 3D object analysis and classification.

  8. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    Performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images obtained by the satellite Earth Observing-1 (EO-1) mission using the hyperspectral imager instrument (Hyperion), which have high input SNR, are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities to be less than a certain threshold) are used to predict denoising efficiency using curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly. Universality of the prediction across different numbers of channels is proven.

  9. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
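    The FFT evaluation over all model positions can be illustrated for pure translation: correlating the cosine and sine of the phase maps yields, for every shift, the sum of cos(Δphase) over pixels, which peaks where image and model phase angles agree. This is a simplified sketch; the actual method also searches orientation and scores only pixels near projected model edges:

```python
import numpy as np

def match_surface(image_phase, model_phase):
    """Phase-similarity match surface over all circular shifts, via the
    FFT convolution theorem.  score[s] = sum_x cos(img[x] - model[x - s]),
    computed as corr(cos, cos) + corr(sin, sin)."""
    def corr(a, b):
        # circular cross-correlation of two equal-shape 2D arrays
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    return (corr(np.cos(image_phase), np.cos(model_phase))
            + corr(np.sin(image_phase), np.sin(model_phase)))
```

    Unambiguous peaks of this match surface are then the candidate object positions handed to analysts.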

  10. 3D super-resolution imaging with blinking quantum dots.

    PubMed

    Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R

    2013-11-13

    Quantum dots are promising candidates for single molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three-dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (fwhm) of 8-17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3-7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells.
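    Localization-based super-resolution of this kind reduces each blinking event to a sub-pixel position estimate for the isolated emitter. A minimal centroid-based sketch (the actual work fits the point-spread function and recovers z through additional optics):

```python
import numpy as np

def localize_centroid(roi):
    """Sub-pixel localization sketch: background-subtracted intensity
    centroid of a small region of interest containing one emitter.
    Returns (x, y) in pixel coordinates."""
    img = np.asarray(roi, float)
    img = np.clip(img - np.median(img), 0, None)   # crude background removal
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```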

  11. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  12. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the adverse psychological effects of stereoscopic viewing. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and deep investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from the disparity and the multiple-baseline method that is used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.

  13. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of a 3D surgical plan that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the entity tooth model, which is manipulated to determine the optimum occlusal position, with 3D-CT skeletal images that are simultaneously displayed in real time (the 3D image display portion), the mandibular position and posture can be determined while taking into account the improvement of skeletal morphology and occlusal condition. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  14. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes the surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating for global illumination in the image data. The algorithm is evaluated on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details, which are not contained in the original surface scans, are incorporated through the use of image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. 
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a

  15. 3D subcellular SIMS imaging in cryogenically prepared single cells

    NASA Astrophysics Data System (ADS)

    Chandra, Subhash

    2004-06-01

    The analysis of a cell with dynamic SIMS ion microscopy depends on the gradual erosion (sputtering) of the cell surface for obtaining spatially resolved chemical information in the X-, Y-, and Z-dimensions. This ideal feature of ion microscopy is rarely explored in probing microfeatures hidden beneath the cell surface. In this study, this capability is explored for the analysis of cells undergoing cell division. The mitotic cells required 3D SIMS imaging in order to study the chemical composition of specialized subcellular regions, like the mitotic spindle, hidden beneath the cell surface. Human glioblastoma T98G cells were grown on silicon chips and cryogenically prepared with a sandwich freeze-fracture method. The fractured freeze-dried cells were used for SIMS analysis with the microscope mode of the CAMECA IMS-3f, which is capable of producing 500 nm lateral image resolution. SIMS analysis of calcium in the spindle region of metaphase cells required sequential recording of as many as 10 images. The T98G human glioblastoma tumor cells revealed an unusual depletion/lack of calcium store in the metaphase spindle, which is in contrast to the accumulation of calcium stores generally observed in normal cells. This study shows the feasibility of the microscope mode imaging in resolving subcellular microfeatures in 3D and opens new avenues of research in spatially resolved chemical analysis of dividing cells.

  16. Multiple 2D video/3D medical image registration algorithm

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    2000-06-01

    In this paper we propose a novel method to register at least two video images to a 3D surface model. The potential applications of such a registration method include image-guided surgery, high-precision radiotherapy, robotics and computer vision. Registration is performed by optimizing a similarity measure with respect to the pose parameters. The similarity measure is based on 'photo-consistency' and computes, for each surface point, how consistent the corresponding video image information in each view is with a lighting model. We took four video views of a volunteer's face, and used an independent method to reconstruct a surface that was intrinsically registered to the four views. In addition, we extracted a skin surface from the volunteer's MR scan. The surfaces were misregistered from a gold-standard pose and our algorithm was used to register both types of surfaces to the video images. For the reconstructed surface, the mean 3D error was 1.53 mm. For the MR surface, the standard deviations of the pose parameters after registration ranged from 0.12 to 0.70 mm and degrees. The performance of the algorithm is accurate, precise and robust.
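    The photo-consistency idea — at the correct pose, a surface point should receive mutually consistent intensities from all views — can be sketched with a variance-based score under a Lambertian assumption (the paper's measure additionally accounts for the lighting model; the function name is ours):

```python
def photo_consistency(samples):
    """Photo-consistency sketch for a Lambertian surface: a surface
    point should look the same in every view, so score each point by
    the variance of its per-view intensity samples and return the mean
    over points (lower = more consistent = better pose).

    samples: list of per-point lists of intensities, one per visible view.
    """
    total = 0.0
    for vals in samples:
        m = sum(vals) / len(vals)
        total += sum((v - m) ** 2 for v in vals) / len(vals)
    return total / len(samples)
```

    Registration then minimizes this score (or maximizes its negation) over the six pose parameters.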

  17. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D, finite-difference, prestack, depth migration remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D, prestack, depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable, seismic imaging code.

  18. Advanced 3-D analysis, client-server systems, and cloud computing—Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement

    PubMed Central

    Zimmermann, Mathis; Falkner, Juergen

    2013-01-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Ideally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management, requiring a complex IT infrastructure spanning multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR. PMID:24282750

  19. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour-guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama; these parameters include the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects with a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  20. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different image modalities representing the same anatomy. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner before the MS measurements are performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. 
The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image
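One step of the pipeline above, finding colocalized m/z values, can be illustrated by correlating ion images: two m/z channels are colocalized when their spatial intensity maps are highly correlated. This is a minimal sketch (the pipeline's actual statistics are more elaborate); the data layout and names are illustrative.

```python
import numpy as np

def colocalized_mz(datacube, mz_index, top_k=3):
    """Rank m/z channels by Pearson correlation of their ion images with
    a reference channel. `datacube` has shape (n_mz, x, y[, z])."""
    flat = datacube.reshape(datacube.shape[0], -1).astype(float)
    ref = flat[mz_index]
    # correlate every channel's flattened ion image with the reference
    corr = np.array([np.corrcoef(ref, ch)[0, 1] for ch in flat])
    corr[mz_index] = -np.inf          # exclude the reference itself
    return np.argsort(corr)[::-1][:top_k]
```

The returned indices are the m/z channels whose spatial distributions most closely follow the reference channel.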

  1. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
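The mean-subtraction method can be sketched directly: for each spatial plane of a spatially low-pass subband, compute and subtract its mean before encoding, and keep the means as side information for reconstruction. A minimal sketch assuming an illustrative array layout (spectral planes × rows × cols).

```python
import numpy as np

def mean_subtract(subband):
    """Subtract the mean of each spatial plane of a spatially low-pass
    subband (shape: planes x rows x cols); return zero-mean data + means."""
    means = subband.mean(axis=(1, 2), keepdims=True)
    return subband - means, means.ravel()

def mean_restore(zero_mean, means):
    """Invert mean_subtract using the stored per-plane means."""
    return zero_mean + means.reshape(-1, 1, 1)
```

The zero-mean planes are then better suited for the 2D-style subband encoders mentioned above, at the small cost of transmitting one mean per plane.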

  2. 3D KLT compression algorithm for camera security systems

    NASA Astrophysics Data System (ADS)

    Fritsch, Lukás; Páta, Petr

    2008-11-01

    This paper deals with an image compression algorithm based on the three-dimensional Karhunen-Loeve transform (3D KLT), whose task is to reduce time redundancy in image data. There are many cases in which the reduction of time redundancy is very efficient and brings a perceptible bitstream reduction. This is very desirable for transferring image data through telephone links, GSM networks, etc. The time redundancy is very perceptible, e.g., in camera security systems, where relatively unchanging frames are very common. The time evolution of the grabbed scene is reviewed according to the energy content in eigenimages. These eigenimages are obtained by the KLT for a suitable number of incoming frames. The required number of transferred eigenimages and eigenvectors is determined on the basis of the energy content in the eigenimages.
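A KLT over a group of frames can be sketched with the snapshot method: treat each frame as a vector, diagonalize the small inter-frame covariance, and keep only the most energetic eigenimages. This is an illustrative sketch of the transform itself; the energy-based truncation rule and all names are assumptions, not the paper's exact implementation.

```python
import numpy as np

def klt_compress(frames, k):
    """KLT of a frame group via the n x n snapshot covariance;
    keep the k most energetic eigenimages."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    C = Xc @ Xc.T / n                      # small n x n covariance
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1][:k]    # highest-energy components
    basis = Xc.T @ evecs[:, order]         # eigenimages, (h*w, k)
    basis /= np.linalg.norm(basis, axis=0) + 1e-12
    coeffs = Xc @ basis                    # (n, k) coefficients to transmit
    return mean, basis, coeffs

def klt_reconstruct(mean, basis, coeffs, shape):
    """Rebuild the frames from mean, eigenimages, and coefficients."""
    return (coeffs @ basis.T + mean).reshape((-1,) + shape)
```

For nearly static scenes most of the energy falls into very few eigenimages, so only those (plus the coefficients) need to be transferred.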

  3. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving over the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

  4. A 3D polarizing display system based on backlight control

    NASA Astrophysics Data System (ADS)

    Liu, Pu; Huang, Ziqiang

    2011-08-01

    In this paper a new three-dimensional (3D) liquid crystal display (LCD) mode based on backlight control is presented to avoid crosstalk between the left- and right-eye images in 3D display. There are two major issues in this new black-frame 3D display mode: one is playing every frame twice in succession; the other is switching the backlight periodically. First, this paper explains the cause of the left- and right-eye image crosstalk and presents a solution to avoid it. Then, we propose to play every frame twice, repeating each frame immediately after it is played, instead of alternating the left and right images frame by frame. Finally, the backlight is switched periodically instead of being kept on all the time: it is turned off while a frame is displayed for the first time, turned on during the second display, and then turned off again as the next frame starts to refresh. Controlling the backlight switch periodically in this way is the key to achieving the black-frame 3D display mode. This mode not only achieves a better 3D display effect by avoiding left/right image crosstalk, but also saves backlight power. Theoretical analysis and experiments show that our method is reasonable and efficient.
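The frame-doubling plus backlight gating described above amounts to a simple schedule, which can be sketched as a toy generator (function and variable names are illustrative):

```python
def black_frame_schedule(frames):
    """Yield (frame, backlight_on) pairs for the black-frame 3D mode:
    each frame is shown twice, with the backlight off on the first pass
    (while the LCD pixels settle) and on for the second."""
    for frame in frames:
        yield frame, False   # first pass: backlight off, panel refreshes
        yield frame, True    # second pass: backlight on, image visible
```

Because the panel has fully settled before the backlight turns on, a viewer's left eye never sees residue of the right-eye frame (and vice versa), which is exactly the crosstalk the mode is designed to eliminate.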

  5. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limiting this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small-vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and a Blackman density tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is lower than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains in a 5 dB range for a wide selection of steering angles. The simulation results may serve as a reference guide for the design of spiral sparse array probes in different application fields. PMID:26285181
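A Fermat-spiral layout with window-based density tapering can be sketched as follows. The radial warping used here, inverting a cumulative density built from the window, is only one plausible reading of "density tapering"; treat the details as illustrative rather than the authors' exact construction.

```python
import numpy as np

def fermat_spiral_array(n_elements, aperture_radius, taper=np.blackman):
    """Element layout on a Fermat spiral with window-based density tapering.
    Angles advance by the golden angle; radial spacing is warped so the
    element density falls off like the taper window toward the rim."""
    golden = np.pi * (3.0 - np.sqrt(5.0))
    k = np.arange(n_elements)
    # half of a symmetric window, decreasing from the centre outwards
    w = np.clip(taper(2 * n_elements)[n_elements:], 0.0, None)
    cdf = np.cumsum(w / w.sum())
    # invert the radial CDF: the fraction of elements inside radius r
    # follows the window's cumulative density
    r = aperture_radius * np.interp((k + 0.5) / n_elements,
                                    cdf, np.linspace(0.0, 1.0, n_elements))
    theta = k * golden
    return r * np.cos(theta), r * np.sin(theta)
```

The golden-angle increment is what makes the layout deterministic, aperiodic, and balanced, as described in the abstract.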

  6. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, together with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials, the panchromatic photopolymer materials such as the DuPont and Bayer photopolymers, is also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images; in particular, the new light sources based on RGB LEDs are described, which show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depends on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials, and new display light sources.

  7. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows MRI mammography to be applied to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, reliable computer-aided diagnosis is needed to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adapted using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment. Breast and glandular tissue rendering, slicing and animation were displayed.
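A sliding-window adaptive threshold of the kind described can be sketched with an integral image for fast local means. The mean-based criterion and the `window`/`offset` knobs are illustrative stand-ins, not the paper's exact rule.

```python
import numpy as np

def adaptive_threshold(image, window=31, offset=0.0):
    """Spatially adaptive thresholding: each pixel is compared against the
    mean of a sliding window centred on it."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode='reflect')
    # integral image: ii[i, j] = sum of padded[:i, :j]
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = image.shape
    sums = (ii[window:, window:][:h, :w]
            - ii[window:, :-window][:h, :w]
            - ii[:-window, window:][:h, :w]
            + ii[:-window, :-window][:h, :w])
    local_mean = sums / (window * window)
    return image > local_mean + offset
```

Because the threshold follows the local mean, the same rule adapts to bright and dark regions of the image, which is the point of making it spatially adaptive.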

  8. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

    Micro (μ-) axial tomography is a challenging technique in microscopy which improves quantitative imaging, especially in cytogenetic applications, by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement in the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition based on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, the real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least-squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy: if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. 
Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
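The two stages the abstract describes, matching object centres of gravity across tilted views on a weighted bipartite graph and fitting a least-squares transform, can be sketched as follows. This assumes the views are coarsely pre-aligned so that a distance-based assignment (here SciPy's Hungarian solver) finds the true correspondences; function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_views(pts_a, pts_b):
    """Match object centres of gravity between two views by minimum-cost
    bipartite assignment, then fit the least-squares rigid transform
    (rotation R and translation t) mapping view A onto view B."""
    # edge weights = pairwise distances; Hungarian algorithm -> matching
    cost = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    ia, ib = linear_sum_assignment(cost)
    A, B = pts_a[ia], pts_b[ib]
    # Kabsch algorithm: least-squares rotation from centred coordinates
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(len(ca))
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return ia, ib, R, t
```

With exact correspondences the fit is exact, which is why the achievable alignment precision reduces to the localization precision of the matched objects.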

  9. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
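Two of the basic Monte Carlo ingredients such a model builds on, in the standard MCML formulation, can be sketched: sampling free path lengths from the total attenuation coefficient, and sampling scattering-angle cosines from the Henyey-Greenstein phase function. Parameter values are illustrative.

```python
import numpy as np

def sample_steps(mu_a, mu_s, n, rng):
    """MCML-style free path lengths: s = -ln(xi) / mu_t, mu_t = mu_a + mu_s."""
    return -np.log(rng.random(n)) / (mu_a + mu_s)

def sample_hg_cosines(g, n, rng):
    """Scattering cosines from the Henyey-Greenstein phase function
    with anisotropy factor g (mean cosine of the scattering angle)."""
    xi = rng.random(n)
    if g == 0:
        return 2 * xi - 1          # isotropic scattering
    frac = (1 - g * g) / (1 - g + 2 * g * xi)
    return (1 + g * g - frac * frac) / (2 * g)
```

A full tissue model of the kind described layers these steps with boundary crossings, absorption weighting, and the geometry (curved surfaces, vessel walls) mentioned in the abstract.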

  10. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects of monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to applications in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer, it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study, 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25 M NaOH. Post-etch, the plastics had been treated with a 10-minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. 
The range of track diameter observed was between 4

  11. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.
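An auto-focus criterion of the kind mentioned can be sketched with a simple sharpness score, the variance of the image Laplacian, maximized over a z-stack. The paper does not specify its focus criterion, so this is a generic illustrative stand-in.

```python
import numpy as np

def focus_measure(img):
    """Variance-of-Laplacian sharpness score: in-focus images have more
    high-frequency content, hence a larger Laplacian variance."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def best_focus(z_stack):
    """Index of the sharpest slice in a focal stack."""
    return max(range(len(z_stack)), key=lambda i: focus_measure(z_stack[i]))
```

Repeating this per cell position over time gives the z-coordinate needed to build the 3-D trajectories the tracking algorithm works with.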

  12. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    NASA Astrophysics Data System (ADS)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not initially be clearly discerned, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that correctly reflect the structure of individual living organs, which is expected to be very useful in biological research.

  13. Distortion-free wide-angle 3D imaging and visualization using off-axially distributed image sensing.

    PubMed

    Zhang, Miao; Piao, Yongri; Kim, Nam-Woo; Kim, Eun-Soo

    2014-07-15

    We propose a new off-axially distributed image sensing (ODIS) scheme using a wide-angle lens for computationally reconstructing distortion-free wide-angle slice images. In the proposed system, the wide-angle image sensor captures a wide-angle 3D scene, and thus the collected information on the 3D objects is severely distorted. To correct this distortion, we introduce a new correction process involving the wide-angle lens into the computational reconstruction in ODIS. This enables us to reconstruct distortion-free wide-angle slice images for the visualization of 3D objects. Experiments were carried out to verify the proposed method. To the best of our knowledge, this is the first time the use of a wide-angle lens in a multiple-perspective 3D imaging system has been described.

  14. Simulation of a new 3D imaging sensor for identifying difficult military targets

    NASA Astrophysics Data System (ADS)

    Harvey, Christophe; Wood, Jonathan; Randall, Peter; Watson, Graham; Smith, Gordon

    2008-04-01

    This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for targeting operations from fast jet platforms, and is currently being integrated with an ATR/I suite for demonstration and testing. The sensor has been extensively modelled and a set of high fidelity simulated imagery has been generated using the CAMEO-SIM scene generation software tool. These include a variety of different scenarios (varying range, platform altitude, target orientation and environments), and some 'difficult' targets such as concealed military vehicles. The ATR/I algorithms have been tested on this image set and their performance compared to 2D passive imagery from the airborne trials using a Wescam MX-15 infrared sensor and real-time ATR/I suite. This paper outlines the principles behind the sensor model and the methodology of 3D scene simulation. An overview of the 3D ATR/I programme and algorithms is presented, and the relative performance of the ATR/I against the simulated image set is reported. Comparisons are made to the performance of typical 2D sensors, confirming the benefits of 3D imaging for targeting applications.

  15. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

    A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries, and the device would be applicable to any security-checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, reducing the inspector's concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths, and hence could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively (see figure). 
X-ray illumination

  16. Computer-aided 3D-shape construction of hearts from CT images for rapid prototyping

    NASA Astrophysics Data System (ADS)

    Fukuzawa, Masayuki; Kato, Yutaro; Nakamori, Nobuyuki; Ozawa, Seiichiro; Shiraishi, Isao

    2012-03-01

    By developing a computer-aided modeling system, the 3D shapes of infant hearts have been constructed interactively from quality-limited CT images for rapid prototyping of biomodels. The 3D model was obtained by the following interactive steps: (1) rough region cropping, (2) outline extraction in each slice with a locally optimized threshold, (3) verification and correction of outline overlap, (4) 3D surface generation of the inside wall, (5) connection of inside walls, (6) 3D surface generation of the outside wall, and (7) synthesis of a self-consistent 3D surface. The manufactured biomodels revealed characteristic 3D shapes of the heart such as the left atrium and ventricle, aortic arch and right auricle. Their realistic cavity and vessel shapes are suitable for surgery planning and simulation. This is a clear advantage over the so-called "blood-pool" model, which is massive and often found in 3D visualization of CT images as a volume-rendering perspective. The developed system contributed both to quality improvement and to modeling-time reduction, which suggests a practical approach to establishing a routine process for manufacturing heart biomodels. Further study of the system performance is still in progress.

  17. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald; Schön, Tobias; Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, time consumption and risks to security personnel of a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power gets steadily cheaper, practical applications of these complex algorithms can be expected in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
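
The class of algorithms referred to can be illustrated with a toy SIRT (Simultaneous Iterative Reconstruction Technique) loop, one of the basic algebraic schemes that few-view iterative methods build on; this is a generic sketch on a tiny dense system, not the authors' method:

```python
import numpy as np

def sirt(A, b, n_iter=200):
    """SIRT: repeatedly back-project the row- and column-normalized
    residual. A is the system matrix, b the measured projections."""
    row_sum = np.maximum(np.abs(A).sum(axis=1), 1e-12)   # inverse of R
    col_sum = np.maximum(np.abs(A).sum(axis=0), 1e-12)   # inverse of C
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += (A.T @ ((b - A @ x) / row_sum)) / col_sum
    return x
```

Each iteration back-projects the normalized residual b − Ax; prior constraints (non-negativity, sparsity, known materials) added to this loop are the kind of additions that make reconstruction from roughly 10x fewer angles feasible.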

  18. Recent progress in 3-D imaging of sea freight containers

    NASA Astrophysics Data System (ADS)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, time consumption and risks to security personnel of a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost, but as computational power gets steadily cheaper, practical applications of these complex algorithms can be expected in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  19. Probabilistic models and numerical calculation of system matrix and sensitivity in list-mode MLEM 3D reconstruction of Compton camera images.

    PubMed

    Maxim, Voichita; Lojacono, Xavier; Hilaire, Estelle; Krimmer, Jochen; Testa, Etienne; Dauvergne, Denis; Magnin, Isabelle; Prost, Rémy

    2016-01-01

    This paper addresses the problem of evaluating the system matrix and the sensitivity for iterative reconstruction in Compton camera imaging. Proposed models and numerical calculation strategies are compared through the influence they have on the three-dimensional reconstructed images. The study attempts to address four questions. First, it proposes an analytic model for the system matrix. Second, it suggests a method for its numerical validation with Monte Carlo simulated data. Third, it compares analytical models of the sensitivity factors with Monte Carlo simulated values. Finally, it shows how the system matrix and the sensitivity calculation strategies influence the quality of the reconstructed images.
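
A bare-bones list-mode MLEM iteration of the kind analyzed in the paper can be sketched as follows; T (one row of system-matrix values per detected event) and the sensitivity s are placeholders, since modeling them is precisely the paper's subject:

```python
import numpy as np

def lm_mlem(T, s, n_iter=50):
    """List-mode MLEM: T[i, j] = t_ij for event i and voxel j,
    s[j] = sensitivity of voxel j. Returns the activity estimate."""
    lam = np.ones(T.shape[1])
    for _ in range(n_iter):
        proj = T @ lam                                  # expected value per event
        lam *= (T.T @ (1.0 / np.maximum(proj, 1e-12))) / np.maximum(s, 1e-12)
    return lam
```

A useful sanity check is that MLEM iterations keep the estimate non-negative and monotonically increase the Poisson log-likelihood Σ_i log((Tλ)_i) − s·λ.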

  20. Multi-fields direct design approach in 3D: calculating a two-surface freeform lens with an entrance pupil for line imaging systems.

    PubMed

    Nie, Yunfeng; Thienpont, Hugo; Duerr, Fabian

    2015-12-28

    Including an entrance pupil in optical systems provides clear benefits for balancing the overall performance of freeform and/or rotationally symmetric imaging systems. Existing direct design methods that are based on perfect imaging of a few discrete ray bundles are not well suited for wide-field-of-view systems. In this paper, a three-dimensional multi-fields direct design approach is proposed to balance the full-field imaging performance of a two-surface freeform lens. The optical path lengths and image points of numerous fields are calculated during the procedure, so that very few initial parameters are needed in advance. Design examples of a barcode scanner lens as well as a line imaging objective are introduced to demonstrate the effectiveness of this method. PMID:26832061

  1. Computation of optimized arrays for 3-D electrical imaging surveys

    NASA Astrophysics Data System (ADS)

    Loke, M. H.; Wilkinson, P. B.; Uhlemann, S. S.; Chambers, J. E.; Oxby, L. S.

    2014-12-01

    3-D electrical resistivity surveys and inversion models are required to accurately resolve structures in areas with very complex geology where 2-D models might suffer from artefacts. Many 3-D surveys use a grid where the number of electrodes along one direction (x) is much greater than in the perpendicular direction (y). Frequently, due to limitations in the number of independent electrodes in the multi-electrode system, the surveys use a roll-along system with a small number of parallel survey lines aligned along the x-direction. The `Compare R' array optimization method previously used for 2-D surveys is adapted for such 3-D surveys. Offset versions of the inline arrays used in 2-D surveys are included in the number of possible arrays (the comprehensive data set) to improve the sensitivity to structures in between the lines. The array geometric factor and its relative error are used to filter out potentially unstable arrays in the construction of the comprehensive data set. Comparisons of the conventional (consisting of dipole-dipole and Wenner-Schlumberger arrays) and optimized arrays are made using a synthetic model and experimental measurements in a tank. The tests show that structures located between the lines are better resolved with the optimized arrays. The optimized arrays also have significantly better depth resolution compared to the conventional arrays.
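
The array geometric factor used to filter unstable configurations follows directly from the electrode positions: for surface electrodes on a homogeneous half-space, with current electrodes A, B and potential electrodes M, N, K = 2π / (1/AM − 1/AN − 1/BM + 1/BN). A minimal sketch (function name illustrative):

```python
import numpy as np

def geometric_factor(A, B, M, N):
    """Geometric factor K of a four-electrode surface array on a
    homogeneous half-space; AM etc. are inter-electrode distances."""
    d = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    g = 1.0 / d(A, M) - 1.0 / d(A, N) - 1.0 / d(B, M) + 1.0 / d(B, N)
    return 2.0 * np.pi / g
```

Arrays whose geometric factor (or its relative error under electrode-position uncertainty) is too large are the ones filtered out of the comprehensive data set as potentially unstable.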

  2. Pseudo-3D Imaging With The DICOM-8

    NASA Astrophysics Data System (ADS)

    Shalev, S.; Arenson, J.; Kettner, B.

    1985-09-01

    We have developed the DICOM-8 digital imaging computer for video image acquisition, processing and display. It is a low-cost mobile system based on a Z80 microcomputer which controls access to two 512 x 512 x 8-bit image planes through a real-time video arithmetic unit. Image presentation capabilities include orthographic images, isometric plots with hidden-line suppression, real-time mask subtraction, binocular red/green stereo, and volumetric imaging with both geometrical and density windows under interactive operator control. Examples are shown for multiplane series of CT images.

  3. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  4. Compensation of log-compressed images for 3-D ultrasound.

    PubMed

    Sanches, João M; Marques, Jorge S

    2003-02-01

    In this study, a Bayesian approach was used for 3-D reconstruction in the presence of multiplicative noise and nonlinear compression of the ultrasound (US) data. Ultrasound images are often considered as being corrupted by multiplicative noise (speckle). Several statistical models have been developed to represent the US data. However, commercial US equipment performs a nonlinear image compression that reduces the dynamic range of the US signal for visualization purposes. This operation changes the distribution of the image pixels, preventing a straightforward application of the models. In this paper, the nonlinear compression is explicitly modeled and considered in the reconstruction process, where the speckle noise present in the radio frequency (RF) US data is modeled with a Rayleigh distribution. The results obtained by considering the compression of the US data are then compared with those obtained assuming no compression. It is shown that the estimation performed using the nonlinear log-compression model leads to better results than those obtained with the Rayleigh reconstruction method. The proposed algorithm is tested with synthetic and real data and the results are discussed. The results have shown an improvement in the reconstruction results when the compression operation is included in the image formation model, leading to sharper images with enhanced anatomical details.
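
The compression model can be made concrete with the commonly assumed logarithmic law y = a·log(x) + b applied to the Rayleigh-distributed envelope; the parameter values below are illustrative, whereas the paper estimates the actual mapping inside the reconstruction:

```python
import numpy as np

def log_compress(env, a=30.0, b=0.0):
    """Dynamic-range compression applied by US scanners for display
    (assumed model y = a*log(env) + b; a, b are illustrative)."""
    return a * np.log(env) + b

def decompress(y, a=30.0, b=0.0):
    """Invert the assumed compression so Rayleigh-based speckle
    models apply to the recovered envelope again."""
    return np.exp((y - b) / a)
```

Because the law is invertible, a reconstruction that models it explicitly can reason about the original Rayleigh statistics even though only compressed pixels are observed.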

  5. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860
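
The derivative-based idea of step one can be sketched with a Laplacian-of-Gaussian blob detector; this is a stand-in for the paper's algorithm, whose selective post-processing is omitted:

```python
import numpy as np
from scipy import ndimage

def segment_blobs(img, sigma=2.0):
    """Mark bright blobs where the Laplacian of Gaussian is negative
    and intensity is above the image mean, then label connected
    components. Returns (label image, number of objects)."""
    log = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    mask = (log < 0) & (img > img.mean())
    labels, n = ndimage.label(mask)
    return labels, n
```

The second-derivative sign change helps separate nearby bright objects, which is the motivation for derivative-based segmentation in densely packed tissue.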

  6. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  7. 3D X-ray imaging methods in support catheter ablations of cardiac arrhythmias.

    PubMed

    Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

    2014-10-01

    Cardiac arrhythmias are a very common illness. Pharmacotherapy is not very effective in persistent arrhythmias and carries a number of risks. Catheter ablation has become an effective and curative treatment method over the past 20 years. To support complex arrhythmia ablations, 3D X-ray imaging of the cardiac cavities is used, most frequently the 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) represents a modern method enabling the creation of CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages lie in the possibility of obtaining images during the procedure, a decreased radiation dose and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow for creation and segmentation of 3D models of all cardiac cavities. Recently, research has demonstrated the use of 3DRA to create 3D models of other cardiac (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). They can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, directly fused with 3D electroanatomic systems and/or fused with fluoroscopy. Intensive development in the creation and use of 3D models has taken place over the past years, and they have become routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development may be anticipated in both the creation and use of these models.

  8. Real-time cylindrical curvilinear 3-D ultrasound imaging.

    PubMed

    Pua, E C; Yen, J T; Smith, S W

    2003-07-01

    In patients who are obese or exhibit signs of pulmonary disease, standard transthoracic scanning may yield poor quality cardiac images. For these conditions, two-dimensional transesophageal echocardiography (TEE) is established as an essential diagnostic tool. Current techniques in transesophageal scanning, though, are limited by incomplete visualization of cardiac structures in close proximity to the transducer. Thus, we propose a 2D curvilinear array for 3D transesophageal echocardiography in order to widen the field of view and increase visualization close to the transducer face. In this project, a 440-channel 5 MHz two-dimensional array with a 12.6 mm aperture diameter on a flexible interconnect circuit has been molded to a 4 mm radius of curvature. A 75% element yield was achieved during fabrication and an average -6 dB bandwidth of 30% was observed in pulse-echo tests. Using this transducer in conjunction with modifications to the beamformer delay software and scan-converter display software of our 3D scanner, we obtained real-time cylindrical curvilinear volumetric scans of tissue phantoms, including a field of view of greater than 120 degrees in the curved, azimuth direction and 65-degree phased-array sector scans in the elevation direction. These images were achieved using a stepped subaperture across the cylindrical curvilinear direction of the transducer face and phased-array sector scanning in the noncurved plane. In addition, real-time volume-rendered images of a tissue-mimicking phantom with holes ranging from 1 cm to less than 4 mm have been obtained. 3D color flow Doppler results have also been acquired. This configuration can theoretically achieve volumes displaying 180 degrees by 120 degrees. The transducer is also capable of obtaining images through a curvilinear stepped subaperture in azimuth in conjunction with a rectilinear stepped subaperture in elevation, further increasing the field of view close to the transducer face. Future work

  9. Excision of nasopharyngeal angiofibroma facilitated by intra-operative 3D-image guidance.

    PubMed

    Murray, A; Falconer, M; McGarry, G W

    2000-04-01

    The latest 3D-image guidance systems to assist surgeons have greatly improved over earlier models. We describe the use of an optical infra-red system to assist in the removal of a juvenile nasopharyngeal angiofibroma. The specific advantages of this system in pre-operative assessment, intra-operative evaluation and excision of the angiofibroma are discussed.

  10. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    NASA Astrophysics Data System (ADS)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system due to its capability to deploy to different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D, then published to a web server for comparing the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine, and can therefore help decision makers and planners in this field decide which contour interval is applicable for their task.

  11. Repositioning accuracy of two different mask systems-3D revisited: Comparison using true 3D/3D matching with cone-beam CT

    SciTech Connect

    Boda-Heggemann, Judit . E-mail: judit.boda-heggemann@radonk.ma.uni-heidelberg.de; Walter, Cornelia; Rahn, Angelika; Wertz, Hansjoerg; Loeb, Iris; Lohr, Frank; Wenz, Frederik

    2006-12-01

    Purpose: The repositioning accuracy of mask-based fixation systems has been assessed with two-dimensional/two-dimensional or two-dimensional/three-dimensional (3D) matching. We analyzed the accuracy of commercially available head mask systems, using true 3D/3D matching, with X-ray volume imaging and cone-beam CT. Methods and Materials: Twenty-one patients receiving radiotherapy (intracranial/head-and-neck tumors) were evaluated (14 patients with rigid and 7 with thermoplastic masks). X-ray volume imaging was analyzed online and offline separately for the skull and neck regions. Translation/rotation errors of the target isocenter were analyzed. Four patients were treated to neck sites. For these patients, repositioning was aided by additional body tattoos. A separate analysis of the setup error on the basis of the registration of the cervical vertebra was performed. The residual error after correction and intrafractional motility were calculated. Results: The mean length of the displacement vector for rigid masks was 0.312 ± 0.152 cm (intracranial) and 0.586 ± 0.294 cm (neck). For the thermoplastic masks, the value was 0.472 ± 0.174 cm (intracranial) and 0.726 ± 0.445 cm (neck). Rigid masks with body tattoos had a displacement vector length in the neck region of 0.35 ± 0.197 cm. The intracranial residual error and intrafractional motility after X-ray volume imaging correction for rigid masks was 0.188 ± 0.074 cm, and was 0.134 ± 0.14 cm for thermoplastic masks. Conclusions: The results of our study have demonstrated that rigid masks have a high intracranial repositioning accuracy per se. Given the small residual error and intrafractional movement, thermoplastic masks may also be used for high-precision treatments when combined with cone-beam CT. The neck region repositioning accuracy was worse than the intracranial accuracy in both cases. However, body tattoos and image guidance improved the accuracy. Finally, the combination of both mask

  12. Rapid prototyping with optical 3D measurement systems

    NASA Astrophysics Data System (ADS)

    Gaessler, J.; Blount, G. N.; Jones, R. M.

    1994-11-01

    One of the important tools for speeding up the prototyping of a new industrial or consumer product is the rapid generation of CAD data from hand-made styling models and moulds. We present a new optical 3D digitizing system which produces, in a fully automatic way, non-ambiguous, absolute and complete surface coordinate data of very complex objects in a short time. The system, named `OptoShape', is based on the projection of sinusoidal fringes with a true grey-level matrix projector. The system measures both non-ambiguous and absolute XYZ surface data with pronounced robustness towards optical surface properties. By moving the 3D sensor head around the object to be digitized with a 3/5-axis manipulator, multiple range images are obtained and automatically merged into a unified cloud of point coordinates. This set of surface coordinates is transferred to a software package where interactive manipulation, sectioning and semi-automatic generation of CAD surface descriptions are performed. CNC data can also be generated directly from the point coordinate data set.
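
The fringe decoding at the heart of such sinusoidal-projection systems can be sketched with the classic 4-step phase-shifting algorithm; this is a generic sketch, as OptoShape's actual decoding is not described in the abstract:

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped fringe phase from four images with 90-degree phase shifts.

    With I_k = A + B*cos(phi + k*pi/2):
        I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi)
    so phi = arctan2(I3 - I1, I0 - I2), independent of A and B.
    """
    return np.arctan2(I3 - I1, I0 - I2)
```

Because the recovered phase is independent of the ambient term A and fringe contrast B, the decoding is robust to varying optical surface properties; phase unwrapping and triangulation then yield the XYZ coordinates.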

  13. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

    We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range and therefore introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined according to the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensating for changes in target shape. The entire process of capturing the 3D surface data and subsequent calculation of beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 deg. in rotation. Importantly, the target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.
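
The least-squares setup idea can be illustrated with the standard rigid point-set fit (Kabsch algorithm), which finds the rotation and translation aligning measured surface points to their planned positions; this is a generic sketch, not the authors' exact formulation:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= P @ R.T + t.

    P, Q: (n_points, dim) corresponding point sets. Uses the SVD of the
    cross-covariance (Kabsch), with a reflection guard on the last axis.
    """
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid improper rotation
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t
```

Applied to skin-marker coordinates extracted from the 3D surface image, such a fit could yield the shift and rotation corrections directly, which is consistent with the sub-minute, 1 mm / 0.5 deg. performance reported.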

  14. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
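
The sparsity-regularized linearized inversion can be sketched with ISTA, a basic proximal-gradient solver for min_x ½‖Ax − b‖² + λ‖x‖₁; here A is a small dense matrix, whereas the paper applies its operator via the accelerated NUFFT:

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=2000):
    """Iterative shrinkage-thresholding for the l1-regularized
    linearized imaging problem: gradient step, then soft-threshold."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x
```

The soft-threshold step is what suppresses sidelobes and clutter: coordinates whose back-projected evidence stays below λ/L are driven exactly to zero, while strong scatterers keep their shape and size.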

  15. Brain surface maps from 3-D medical images

    NASA Astrophysics Data System (ADS)

    Lu, Jiuhuai; Hansen, Eric W.; Gazzaniga, Michael S.

    1991-06-01

    The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method which extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebral-spinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results by flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.

  16. 3D image copyright protection based on cellular automata transform and direct smart pixel mapping

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Kim, Seok-Tae; Lee, In-Kwon

    2014-10-01

    We propose a three-dimensional (3D) watermarking system with the direct smart pixel mapping algorithm to improve the resolution of the reconstructed 3D watermark plane images. The depth-converted elemental image array (EIA) is obtained through the computational pixel mapping method. In the watermark embedding process, the depth-converted EIA is first scrambled by using the Arnold transform, which is then embedded in the middle frequency of the cellular automata (CA) transform. Compared with conventional computational integral imaging reconstruction (CIIR) methods, this proposed scheme gives us a higher resolution of the reconstructed 3D plane images by using the quality-enhanced depth-converted EIA. The proposed method, which can obtain many transform planes for embedding watermark data, uses CA transforms with various gateway values. To prove the effectiveness of the proposed method, we present the results of our preliminary experiments.
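
The Arnold transform used to scramble the depth-converted EIA is the cat map (x, y) → (x + y, x + 2y) mod N on a square image; a minimal sketch:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold (cat map) scrambling of a square N x N image.

    Each pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N); the map
    matrix [[1, 1], [1, 2]] has determinant 1, so this is a permutation.
    """
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        nxt[(x + y) % n, (x + 2 * y) % n] = out
        out = nxt
    return out
```

Because the map is a permutation, the scrambling is exactly invertible (by iterating through the map's period, or by applying the inverse matrix [[2, −1], [−1, 1]] mod N), so the watermark can be recovered bit-exactly after extraction.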

  17. Constraining 3D Process Sedimentological Models to Geophysical Data Using Image Quilting

    NASA Astrophysics Data System (ADS)

    Tahmasebi, P.; Da Pra, A.; Pontiggia, M.; Caers, J.

    2014-12-01

    3D process geological models, whether for carbonate or sedimentological systems, have been proposed for modeling realistic subsurface heterogeneity. The problem with such forward process models is that they are not constrained to any subsurface data, whether wells or geophysical surveys. We propose a new method for realistic geological modeling of complex heterogeneity by hybridizing 3D process modeling of geological deposition with conditioning by means of a novel multiple-point geostatistics (MPS) technique termed image quilting (IQ). Image quilting is a pattern-based technique that stitches together patterns extracted from training images to generate stochastic realizations that look like the training image. In this paper, we illustrate how 3D process model realizations can be used as training images in image quilting. To constrain the realization to seismic data we first interpret each facies in the geophysical data. These interpretations, while overly smooth and not reflecting finer-scale variation, are used as auxiliary variables in the generation of the image quilting realizations. To condition to well data, we first perform a kriging of the well data to generate a kriging map and kriging variance. The kriging map is used as an additional auxiliary variable while the kriging variance is used as a weight given to the kriging-derived auxiliary variable. We present an application to a giant offshore reservoir. Starting from advanced seismic attribute analysis and sedimentological interpretation, we build the 3D sedimentological process-based model and use it as a non-stationary training image for conditional image quilting.

  18. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    NASA Astrophysics Data System (ADS)

    Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

    2011-09-01

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  19. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  20. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.
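    The benefit of incorporating prior knowledge about the materials can be illustrated with a toy stand-in for discrete tomography (not the authors' algorithm): alternate algebraic reconstruction (Kaczmarz/ART) sweeps with snapping each pixel to the nearest known material value. Names and parameters here are illustrative.

```python
import numpy as np

def kaczmarz_discrete(A, b, levels, n_outer=20, n_inner=10):
    """Toy discrete tomography solver: Kaczmarz (ART) sweeps over the
    projection equations A x = b, interleaved with projecting each pixel
    onto the discrete set of known material values (`levels`)."""
    x = np.zeros(A.shape[1])
    levels = np.asarray(levels, float)
    for _ in range(n_outer):
        for _ in range(n_inner):
            for i in range(A.shape[0]):        # one ART sweep
                a = A[i]
                x += (b[i] - a @ x) / (a @ a) * a
        # prior knowledge: pixels may take only the discrete levels
        x = levels[np.abs(x[:, None] - levels[None, :]).argmin(axis=1)]
    return x
```

    Even with very few projection rows, the discretization step rules out most continuous solutions, which is the mechanism behind the reduced scanning time claimed in the paper.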

  1. Near field 3D scene simulation for passive microwave imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wu, Ji

    2006-10-01

    Scene simulation is necessary work in near-field passive microwave remote sensing. A 3-D scene simulation model of microwave radiometric imaging based on the ray-tracing method is presented in this paper. The model accounts for the essential influencing factors and general requirements, such as rough-surface radiation, sky radiation (which acts as the uppermost illuminator in outdoor environments), the polarization rotation of the temperature rays caused by multiple reflections, and the antenna point spread function, which determines the resolution of the model's final outputs. Using this model we simulated a virtual scene and analyzed the resulting microwave radiometric phenomenology; finally, two real scenes, a building and an airstrip, were simulated to validate the model. The comparison between the simulations and field measurements indicates that the model is feasible in practice. Furthermore, we analyzed the signatures of the model outputs and identified some underlying phenomenology of microwave radiation that differs from that in the optical and infrared bands.
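    The role of the antenna point spread function in setting the output resolution can be shown with a one-dimensional cut: the simulated brightness-temperature field is convolved with a normalized PSF. `antenna_smooth` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def antenna_smooth(t_b, psf):
    """Convolve a brightness-temperature profile (1D cut of the scene)
    with a normalized antenna point spread function; the PSF width is
    what limits the resolution of the radiometric image."""
    t_b = np.asarray(t_b, float)
    psf = np.asarray(psf, float)
    psf = psf / psf.sum()                  # normalize to conserve power
    return np.convolve(t_b, psf, mode="same")
```

    A uniform scene keeps its temperature away from the edges, while a point source is spread into a copy of the PSF, which is the expected behavior of an antenna-limited imager.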

  2. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    3D computer graphics (3D-CG) animation that uses a speaking virtual actor is very effective as an educational medium, but producing a 3D-CG animation takes a long time. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using a Virtual Actor.…

  3. 3D surface scan of biological samples with a Push-broom Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Kincaid, Russell; Hruska, Zuzana; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2013-08-01

    The food industry is always on the lookout for sensing technologies for rapid and nondestructive inspection of food products. Hyperspectral imaging technology integrates both imaging and spectroscopy into unique imaging sensors. Its application for food safety and quality inspection has made significant progress in recent years. Specifically, hyperspectral imaging has shown its potential for surface contamination detection in many food related applications. Most existing hyperspectral imaging systems use pushbroom scanning which is generally used for flat surface inspection. In some applications it is desirable to be able to acquire hyperspectral images on circular objects such as corn ears, apples, and cucumbers. Past research describes inspection systems that examine all surfaces of individual objects. Most of these systems did not employ hyperspectral imaging. These systems typically utilized a roller to rotate an object, such as an apple. During apple rotation, the camera took multiple images in order to cover the complete surface of the apple. The acquired image data lacked the spectral component present in a hyperspectral image. This paper discusses the development of a hyperspectral imaging system for a 3-D surface scan of biological samples. The new instrument is based on a pushbroom hyperspectral line scanner using a rotational stage to turn the sample. The system is suitable for whole surface hyperspectral imaging of circular objects. In addition to its value to the food industry, the system could be useful for other applications involving 3-D surface inspection.
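    For a rotational-stage pushbroom scan of a roughly cylindrical sample (a corn ear or cucumber), the angular step that yields square pixels follows from the arc length r·dθ matching the cross-track pixel size. This one-line helper is our illustration of that geometry, not an instrument parameter from the paper.

```python
import numpy as np

def rotation_step_deg(pixel_size_mm, radius_mm):
    """Angular increment giving square pixels when line-scanning a
    rotating cylinder: arc length radius * dtheta should equal the
    pixel footprint along the scan direction."""
    return np.degrees(pixel_size_mm / radius_mm)
```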

  4. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
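    The study's two themes, quality metrics and parameter sensitivity, can be made concrete with a brute-force 1D bilateral filter and the MSE metric. The spatial (`sigma_s`) and range (`sigma_r`) scaling parameters are exactly the kind whose poor choice the authors warn can yield very poor denoising; the function names are ours, not the study's GPU code.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between a denoised image and a reference."""
    return np.mean((a - b) ** 2)

def bilateral_1d(signal, radius, sigma_s, sigma_r):
    """Brute-force 1D bilateral filter: weights combine spatial distance
    (sigma_s) and intensity difference (sigma_r), so edges are preserved
    while noise in flat regions averages out."""
    signal = np.asarray(signal, float)
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

    On a noisy step edge, a small stencil with a well-chosen `sigma_r` lowers the MSE against the clean signal while keeping the edge sharp, mirroring the paper's finding that small stencils can already give the best structural similarity.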

  5. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.
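    Each spectral band in the system above is independently range resolved by pulse time of flight; the basic conversion is half the round-trip path at the speed of light. A minimal sketch (hypothetical helper, not the demonstrator's software):

```python
def tof_range_m(round_trip_ns):
    """Range from a time-of-flight echo: the pulse travels out and back,
    so the range is half the round-trip path length."""
    c = 299_792_458.0              # speed of light in vacuum, m/s
    return c * round_trip_ns * 1e-9 / 2
```

    Multiple return pulses in one band (e.g. foliage then vehicle) simply yield one such range per detected echo, which is what enables the spatial unmixing of partially obscured objects described above.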

  6. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    NASA Astrophysics Data System (ADS)

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited with microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner total nodule volumes are rapidly estimated and demonstrated here to show contrasting time dependent changes during growth and in response to treatment. This work suggests the utility of DHM to quantify changes in 3D structure over time and suggests the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.
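    The quantitative step from unwrapped phase to nodule thickness and volume can be sketched as t = λΔφ/(2πΔn), summed over pixels. This assumes a known, uniform refractive-index contrast Δn between nodule and medium; the helper names and parameter values are ours, not the paper's.

```python
import numpy as np

def thickness_map(phase, wavelength_um, dn):
    """Optical thickness from unwrapped DHM phase:
    t = lambda * phi / (2 * pi * dn), dn = index contrast to medium."""
    return wavelength_um * phase / (2 * np.pi * dn)

def nodule_volume(phase, wavelength_um, dn, pixel_area_um2):
    """Estimate nodule volume by summing per-pixel thickness."""
    return np.sum(thickness_map(phase, wavelength_um, dn)) * pixel_area_um2
```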

  7. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprising microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved, with the full width at half maximum of microbeams measured on images with resolutions as fine as 0.09 μm/pixel. Profiles demonstrated the change in the peak-to-valley dose ratio for interspersed MRT microbeam arrays, and subtle variations in sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
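    The peak-to-valley dose ratio (PVDR) quoted above is computed from beam profiles; a minimal sketch of the metric, given user-identified peak and valley sample positions (a hypothetical helper, not the study's analysis code):

```python
import numpy as np

def pvdr(profile, peak_idx, valley_idx):
    """Peak-to-valley dose ratio of a microbeam profile: mean dose at
    the peak positions divided by mean dose at the valley positions."""
    profile = np.asarray(profile, float)
    return profile[list(peak_idx)].mean() / profile[list(valley_idx)].mean()
```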

  8. Multi-camera system for 3D forensic documentation.

    PubMed

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that the botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with Photoscan©. PMID:26921815

  9. Multi-camera system for 3D forensic documentation.

    PubMed

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that the botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with Photoscan©.

  10. 3-D Ultrafast Doppler Imaging Applied to the Noninvasive and Quantitative Imaging of Blood Vessels in Vivo

    PubMed Central

    Provost, J.; Papadacci, C.; Demene, C.; Gennisson, J-L.; Tanter, M.; Pernot, M.

    2016-01-01

    Ultrafast Doppler Imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D Ultrafast Ultrasound Imaging, a technique that can produce thousands of ultrasound volumes per second, based on three-dimensional plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that non-invasive 3-D Ultrafast Power Doppler, Pulsed Doppler, and Color Doppler Imaging can be used to perform quantitative imaging of blood vessels in humans when using coherent compounding of three-dimensional tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D Ultrafast Imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. 3-D Ultrafast Power Doppler Imaging was first validated by imaging Tygon tubes of varying diameter and its in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D Color and Pulsed Doppler Imaging using compounded emissions were also applied in the carotid artery and the jugular vein in one healthy volunteer. PMID:26276956
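    The voxel-wise power Doppler step can be sketched as: remove the static (clutter) component along the slow-time ensemble, then average the remaining Doppler power |IQ|². The mean-subtraction wall filter below is a deliberately simple stand-in for the clutter filters used clinically; the function name is ours.

```python
import numpy as np

def power_doppler(iq):
    """Voxel-wise power Doppler from a slow-time ensemble of beamformed
    complex IQ volumes (ensemble axis first). Subtracting the slow-time
    mean is a crude wall filter removing stationary tissue; the residual
    mean power reflects moving blood."""
    iq = iq - iq.mean(axis=0, keepdims=True)
    return np.mean(np.abs(iq) ** 2, axis=0)
```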

  11. A fast 3D reconstruction system with a low-cost camera accessory

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-06-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
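    The "simple and efficient reconstruction routine" of photometric stereo amounts to solving I = L·(ρn) per pixel by least squares, where L stacks the known light directions, ρ is albedo and n the unit normal. A minimal sketch under the Lambertian assumption (function and variable names are ours, not the authors' code):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Per-pixel surface normal and albedo from >= 3 images taken under
    known distant light directions: solve I = L @ (albedo * n) in the
    least-squares sense for every pixel at once."""
    k, h, w = images.shape
    L = np.asarray(light_dirs, float)            # (k, 3) unit vectors
    I = images.reshape(k, -1)                    # (k, h*w) intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)    # g = albedo * normal
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-12)
    return n.reshape(3, h, w), albedo.reshape(h, w)
```

    The height map reported in the paper would then follow by integrating the normal field; with the four-LED accessory, k = 4 gives an overdetermined and therefore noise-robust system.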

  12. A fast 3D reconstruction system with a low-cost camera accessory.

    PubMed

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.

  13. A fast 3D reconstruction system with a low-cost camera accessory

    PubMed Central

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-01-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407

  14. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  15. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32 × 32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images at compression ratios as low as 10% was also demonstrated.
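    When the full set of scrambled Hadamard patterns is measured, recovery reduces to undoing the row permutation and applying the inverse Hadamard transform. The sketch below uses a dense matrix product for clarity (the paper's fast transform does the same in O(n log n)); names are illustrative.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_recover(measurements, perm):
    """Invert scrambled-Hadamard single-pixel measurements:
    y_scrambled[i] = H[perm[i]] @ x, so unscramble the rows and use
    orthogonality (H.T @ H = n * I) to recover x."""
    n = len(measurements)
    H = hadamard(n)
    y = np.empty(n)
    y[perm] = measurements          # undo the row scrambling
    return H.T @ y / n
```

    The compressed-sensing mode mentioned above would instead keep only a subset of the rows and solve a sparsity-regularized inverse problem.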

  16. Synthetic 3D multicellular systems for drug development.

    PubMed

    Rimann, Markus; Graf-Hausner, Ursula

    2012-10-01

    Since the 1970s, the limitations of two-dimensional (2D) cell culture and the relevance of appropriate three-dimensional (3D) cell systems have become increasingly evident. Extensive effort has thus been made to move cells from a flat world to a 3D environment. While 3D cell culture technologies are now widely used in academia, 2D culture technologies are still entrenched in the (pharmaceutical) industry for most kinds of cell-based efficacy and toxicology tests. However, 3D cell culture technologies will certainly become more widely applicable if biological relevance, reproducibility and high throughput can be assured at acceptable costs. The most recent innovations and developments clearly indicate that the transition from 2D to 3D cell culture for industrial purposes, for example drug development, is simply a question of time.

  17. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    SciTech Connect

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  18. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
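    The separation described above, Map-Reduce over the complex 3D geometry versus plain queries over semantics, can be sketched for one analysis task: map each building's triangle mesh to a per-building surface area, then reduce by summation. This toy is ours, not the paper's NoSQL implementation.

```python
from functools import reduce
import numpy as np

def triangle_area(tri):
    """Area of a 3D triangle via the cross-product formula."""
    a, b, c = (np.asarray(p, float) for p in tri)
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def city_surface_area(buildings):
    """Map-Reduce style aggregate: the map stage (per-building area)
    is independent per building and thus parallelizable; the reduce
    stage sums the partial results."""
    mapped = (sum(triangle_area(t) for t in tris) for tris in buildings)
    return reduce(lambda acc, v: acc + v, mapped, 0.0)
```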

  19. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and implemented in MATLAB® (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high speed (5 microsecond exposure time
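    The implicit idea, fit a mapping in polynomial terms of the coordinates by linear least squares so that only matrix multiplications are needed at run time, can be sketched as follows. The ten-term basis here is illustrative; the paper's exact matrix forms and parameter counts differ, and the names are ours.

```python
import numpy as np

def _design(points3d):
    """Polynomial basis in (x, y, z); extra terms can be appended to
    absorb more distortion, mirroring the paper's tunable parameter count."""
    x, y, z = np.asarray(points3d, float).T
    one = np.ones_like(x)
    return np.column_stack([x, y, z, x*y, x*z, y*z, x*x, y*y, z*z, one])

def fit_mapping(points3d, pixels):
    """Fit an implicit 3D -> 2D mapping by linear least squares over
    calibration correspondences (no physical camera parameters)."""
    M, *_ = np.linalg.lstsq(_design(points3d), np.asarray(pixels, float),
                            rcond=None)
    return M

def apply_mapping(M, points3d):
    """Map new 3D points to pixel coordinates with one matrix product."""
    return _design(points3d) @ M
```

    Because the model is linear in its parameters, no nonlinear solver is needed, which is the contrast with explicit calibration drawn in the abstract.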

  20. 3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse

    PubMed Central

    Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.

    2009-01-01

    We developed the Case Cryo-imaging system that provides information rich, very high-resolution, color brightfield, and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified, brightfield/ fluorescence microscope, and a robotic xyz imaging system positioner, all of which is fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse, in which enhanced green fluorescent protein was expressed under gamma actin promoter in smooth muscle cells, gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm, over very large regions of mouse brain. Software is fully automated with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole animal in vivo imaging and histology. PMID:19248166

  1. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    Stereoscopic images are often captured using dual cameras arranged side by side together with optical path-switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at a cost: the limited light entry worsens the final image resolution and brightness. Optofluidics is known to offer good reconfigurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing-power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is expected to have applications in robotic vision, stereoscopy, laparoendoscopy, and miniaturized zoom lens systems.
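    The pressure-driven focus tuning can be illustrated with thin-lens arithmetic: treating the pressurized membrane as a spherical cap gives its radius of curvature, and the lensmaker's equation then gives the focal length. The refractive index and dimensions below are illustrative assumptions, not the device's actual parameters.

```python
def planoconvex_focal_length(radius_mm, n_fluid=1.33):
    """Thin-lens estimate for a fluid-filled plano-convex membrane lens:
    1/f = (n - 1) * (1/R1 - 1/R2), with R2 -> infinity for the flat side."""
    return radius_mm / (n_fluid - 1.0)

def membrane_radius(aperture_mm, sag_mm):
    """Spherical-cap radius of curvature from the aperture and the sag
    (bulge height) of the pressurized membrane: R = (a^2 + s^2) / (2 s),
    where a is the semi-aperture."""
    a = aperture_mm / 2.0
    return (a * a + sag_mm * sag_mm) / (2.0 * sag_mm)

# Increasing hydrostatic pressure increases the sag, shortening f
# (values are illustrative, not taken from the paper):
for sag in (0.1, 0.3, 0.6):
    R = membrane_radius(10.0, sag)       # 10 mm singlet aperture
    f = planoconvex_focal_length(R)
    print(f"sag={sag} mm  R={R:.1f} mm  f={f:.1f} mm")
```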

  2. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep-space communications, and scanning video imaging are three applications requiring very low-noise optical receivers to detect fast, weak optical signals. HgCdTe electron-initiated avalanche photodiodes (e-APDs) in linear multiplication mode are the detectors of choice thanks to their high quantum efficiency, high gain at low bias, high bandwidth, and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch was designed as a gated optical receiver for 3D LADAR. The ROIC works at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit, and output driver. The unit cell circuit is the key component; it consists of a preamplifier, correlated double sampling (CDS), a bias circuit, and a timing control module. The preamplifier uses a capacitive-feedback transimpedance amplifier (CTIA) structure with two capacitors that offer switchable capacitance for passive/active dual-mode imaging. The core of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors, a technique whose working characteristics make it well suited to ROIC signal processing. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the buffer uses a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 µA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integration currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.
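    The gain trade-off behind the switchable feedback capacitance can be sketched with the ideal integrator relation V_out = I * t_int / C_fb: a small capacitor gives high gain for the weak passive-mode currents, a larger one extends the well depth for active mode. The capacitor values below are hypothetical; the paper does not state them.

```python
# Ideal CTIA output: the photocurrent integrates on the feedback
# capacitor.  Switching C_fb trades gain for well depth, which is how a
# single unit cell serves both passive and active imaging modes.

def ctia_vout(i_amp, t_int_s, c_fb_farad):
    return i_amp * t_int_s / c_fb_farad

C_ACTIVE = 1e-12     # larger feedback cap for the strong active returns
C_PASSIVE = 10e-15   # small feedback cap for weak passive-mode currents

# Active mode: 200 nA .. 4 uA over the 80 ns gate.
v_lo = ctia_vout(200e-9, 80e-9, C_ACTIVE)    # 16 mV
v_hi = ctia_vout(4e-6, 80e-9, C_ACTIVE)      # 320 mV

# Passive mode: 1 nA .. 20 nA over the 150 ns gate.
v_p_lo = ctia_vout(1e-9, 150e-9, C_PASSIVE)  # 15 mV
v_p_hi = ctia_vout(20e-9, 150e-9, C_PASSIVE) # 300 mV

# For the ideal integrator the response is exactly linear in I; the <1%
# nonlinearity reported for the real device comes from amplifier and
# switch non-idealities that this sketch omits.
```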

  3. Integral imaging as a modality for 3D TV and displays

    NASA Astrophysics Data System (ADS)

    McCormick, Malcolm; Davies, Neil A.; Milnthorpe, Graham; Aggoun, Amar; Forman, Matthew C.

    2002-11-01

    The development of 3D TV systems and displays for public use requires that several important criteria be satisfied: the perceived resolution must be as good as existing 2D TV, the image must be in full natural colour, compatibility with current 2D systems in terms of frame rate and transmission data must be ensured, human-factors concerns must be satisfied, and seamless autostereoscopic viewing must be provided. Several candidate 3D technologies, for example stereoscopic multiview, holographic, and integral imaging, endeavour to satisfy these technological and other conditions. The perceived advantages of integral imaging are that the 3D data can be captured by a single-aperture camera, the display is a scaled 3D optical model, and in viewing, accommodation and convergence behave as in normal (natural) sight, thereby preventing possible eye strain. Consequently, it appears to be ideal for prolonged human use. The technological factors that inhibited the possible use of integral imaging for TV display have been shown to be less intractable than at first thought. For example, compression algorithms are available such that terrestrial bandwidth is perfectly suitable for transmission purposes. Real-time computer generation of integral images is feasible, and the high-resolution LCD panels currently available are sufficient to enable high-contrast, high-quality image display.

  4. Deformable M-Reps for 3D Medical Image Segmentation.

    PubMed

    Pizer, Stephen M; Fletcher, P Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z; Fridman, Yonatan; Fritsch, Daniel S; Gash, Graham; Glotzer, John M; Jiroutek, Michael R; Lu, Conglin; Muller, Keith E; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L

    2003-11-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects, and in particular to capturing prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single-figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry-to-image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their support for segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep-implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
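    A medial atom can be sketched as a small data structure. The 2D simplification below (an assumption for illustration; real m-rep atoms are 3D, with a full figural frame and slab geometry) shows how a hub position, radius, frame direction, and object angle imply two opposing boundary points.

```python
import math
from dataclasses import dataclass

@dataclass
class MedialAtom2D:
    """A 2D toy version of an m-rep medial atom: a hub position, a
    radius, a figural frame (reduced here to one angle), and the object
    angle between the two spokes that reach the implied boundary."""
    x: float
    y: float
    r: float
    frame_angle: float   # orientation of the figural frame
    object_angle: float  # half-angle between the opposing spokes

    def implied_boundary(self):
        """The two boundary points implied by the atom: the hub plus a
        spoke of length r on each side of the figural direction."""
        pts = []
        for side in (+1.0, -1.0):
            a = self.frame_angle + side * self.object_angle
            pts.append((self.x + self.r * math.cos(a),
                        self.y + self.r * math.sin(a)))
        return pts

atom = MedialAtom2D(0.0, 0.0, 1.0, 0.0, math.pi / 2)
# With object_angle = 90 degrees the spokes point straight "up" and
# "down", giving implied boundary points (0, 1) and (0, -1).
```

    Deforming the atom (moving the hub, scaling r, rotating the frame) moves both implied boundary points coherently, which is what gives m-reps their built-in boundary correspondence under deformation.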

  5. a 3d Campus Information System - Initial Studies

    NASA Astrophysics Data System (ADS)

    Kahraman, I.; Karas, I. R.; Alizadehasharfi, B.; Abdul-Rahman, A.

    2013-08-01

    This paper discusses a method of developing a 3D Campus Information System that can handle 3D spatial data in desktop and web environments. The method consists of texturing building facades for 3D building models and modeling the 3D Campus Information System itself. In this paper, several of these steps are carried out: modeling 3D buildings, placing these models on the terrain and ortho-photo, integration with a geo-database, transfer to the CityServer3D environment using the CityGML format, and design of the service. In addition, a simple but novel method of texturing building facades for 3D city modeling, based on the Dynamic Pulse Function (DPF), is used for synthetic and procedural texturing; DPF is very fast compared to photo-realistic texturing methods. Finally, the project is to be presented on the web using web mapping services, which makes 3D analysis easy for decision makers.

  6. 3D imaging of enzymes working in situ.

    PubMed

    Jamme, F; Bourquin, D; Tawil, G; Viksø-Nielsen, A; Buléon, A; Réfrégiers, M

    2014-06-01

    Today, the development of slowly digestible food with positive health impact and the production of biofuels are matters of intense research. The latter is achieved via enzymatic hydrolysis of starch or of biomass such as lignocellulose. Label-free imaging using UV autofluorescence provides a powerful tool to follow a single enzyme acting on a non-UV-fluorescent substrate. In this article, we report synchrotron DUV fluorescence in three-dimensional imaging to visualize in situ the diffusion of enzymes on a solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup was specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. This tool is thus particularly effective for improving knowledge and understanding of the enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open the way to new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, or biofuels. PMID:24796213

  7. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming the brightness dimming of the 3D display mode. The 3D surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization perform well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image-mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround that combines global and local adaptation. Evaluating local image rendering in terms of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
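    The idea of blending global and local adaptation can be sketched generically: a global gamma curve is mixed with a local term driven by the neighborhood mean, so dark regions are lifted without clipping highlights. This is an illustrative toy with assumed parameters, not the authors' adaptation model.

```python
def local_means(img, k=1):
    """Mean over a (2k+1)^2 neighborhood; img is a list of rows in [0, 1]."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[i2][j2]
                    for i2 in range(max(0, i - k), min(h, i + k + 1))
                    for j2 in range(max(0, j - k), min(w, j + k + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def tone_map(img, gamma=0.6, alpha=0.5):
    """Blend a global power-law curve with a local-mean adaptation term."""
    mu = local_means(img)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            g = img[i][j] ** gamma              # global gamma compression
            l = img[i][j] / (mu[i][j] + 0.05)   # local adaptation term
            out[i][j] = min(1.0, alpha * g + (1 - alpha) * l)
    return out

img = [[0.1 for _ in range(4)] for _ in range(4)]   # a dark flat patch
out = tone_map(img)                                  # dark values are lifted
```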

  8. 3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-12-01

    Visualizing and analyzing the morphological structure of carotid bifurcations are important for understanding the etiology of carotid atherosclerosis, a major cause of stroke and transient ischemic attack. Ultrasound examinations have been widely employed for delineating vasculature in the carotid artery because they are noninvasive and involve no ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study proposes image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transversal images. The lumen boundaries in each transverse ultrasound image were detected using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. Comparison of the results shows that the third technique provides the best boundary extraction: incomplete lumen boundaries caused by acoustic artifacts are somewhat resolved by the correlation mapping, and distortion in boundary detection near the bifurcation apex is largely reduced by the ellipse fitting. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. For the 3D vasodilatation of the carotid bifurcation, lumen geometries at the contraction and expansion states were simultaneously depicted at various view angles. The present 3D reconstruction methods would be useful for efficient extraction and construction of 3D lumen geometries of carotid bifurcations from 2D ultrasound images.
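    The ellipse-fitting step can be illustrated with a plain algebraic least-squares conic fit (assuming NumPy); this is a generic fit for illustration, and the paper's exact fitting procedure may differ.

```python
import numpy as np

def fit_ellipse(xs, ys):
    """Least-squares fit of the conic a x^2 + b xy + c y^2 + d x + e y = 1
    to boundary samples; valid when the conic's constant term is nonzero."""
    A = np.column_stack([xs**2, xs*ys, ys**2, xs, ys])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    return coeffs

def ellipse_center(coeffs):
    """The conic's gradient vanishes at the center:
    [2a b; b 2c] [x; y] = [-d; -e]."""
    a, b, c, d, e = coeffs
    M = np.array([[2*a, b], [b, 2*c]])
    return np.linalg.solve(M, [-d, -e])

# Slightly perturbed samples of an ellipse centered at (2, -1), standing
# in for a noisy lumen boundary; semi-axes 3 and 1.5.
t = np.linspace(0, 2*np.pi, 60, endpoint=False)
xs = 2 + 3*np.cos(t) + 0.01*np.cos(7*t)
ys = -1 + 1.5*np.sin(t) + 0.01*np.sin(5*t)
cx, cy = ellipse_center(fit_ellipse(xs, ys))   # recovers roughly (2, -1)
```

    Fitting on the correlation map rather than the raw B-mode image, as the paper does, feeds this same step with cleaner boundary samples.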

  9. Thin client performance for remote 3-D image display.

    PubMed

    Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin

    2003-01-01

    Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile, and optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer over modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance under conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.

  10. High-resolution 3D imaging laser radar flight test experiments

    NASA Astrophysics Data System (ADS)

    Marino, Richard M.; Davis, W. R.; Rich, G. C.; McLaughlin, J. L.; Lee, E. I.; Stanley, B. M.; Burnside, J. W.; Rowe, G. S.; Hatch, R. E.; Square, T. E.; Skelly, L. J.; O'Brien, M.; Vasile, A.; Heinrichs, R. M.

    2005-05-01

    Situation awareness and accurate target identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage netting and/or tree-canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photodiode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, solid-state, frequency-doubled Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16 kHz pulse rate and a 532 nm wavelength. The single-photon detection efficiency has been measured to be >20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically >10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode
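    The per-pixel 500 MHz counter fixes the range quantization: each 2 ns tick corresponds to about 0.3 m of range via r = c t / 2, the factor of two accounting for the out-and-back path. A minimal sketch of that arithmetic:

```python
# Each pixel stops its 500 MHz counter when the Geiger-mode APD fires,
# so the time of flight is quantized in 2 ns ticks.

C = 299_792_458.0          # speed of light, m/s
CLOCK_HZ = 500e6           # per-pixel counter frequency from the paper

def range_from_ticks(ticks):
    t = ticks / CLOCK_HZ   # round-trip time of flight in seconds
    return C * t / 2.0     # one-way range in metres

range_resolution = C / (2.0 * CLOCK_HZ)   # ~0.30 m per clock tick
```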

  11. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, where the 3D geometry is complex. Contemporary geo-databases include 3D street-level objects, which demand frequent updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions, and unreliable 3D geometry caused by the limited performance of automatic image matchers. Mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection, but periodic acquisition is very expensive. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured by an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient means of frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images taken at a later epoch are registered to the point cloud, and the point clouds are projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical
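    Projecting the reference point cloud onto a later image can be sketched with a plain z-buffer, where each pixel keeps only the nearest point so that occluded geometry does not contaminate the comparison. The pinhole model below is a simplification; the paper's weighted-window formulation is not reproduced.

```python
def zbuffer_project(points, f=100.0, width=8, height=8):
    """points: (x, y, z) in camera coordinates with z > 0 looking forward.
    Returns a per-pixel depth map holding the nearest point's z."""
    depth = [[float("inf")] * width for _ in range(height)]
    for x, y, z in points:
        if z <= 0:
            continue                             # behind the camera
        u = int(round(f * x / z)) + width // 2   # pinhole projection
        v = int(round(f * y / z)) + height // 2
        if 0 <= u < width and 0 <= v < height and z < depth[v][u]:
            depth[v][u] = z                      # nearest point wins
    return depth

pts = [(0.0, 0.0, 5.0), (0.0, 0.0, 2.0), (0.1, 0.0, 4.0)]
d = zbuffer_project(pts)
# The two points on the optical axis land in the same pixel; the nearer
# one (z = 2.0) occludes the farther one (z = 5.0).
```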

  12. Gel tomography for 3D acquisition of plant root systems

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.; Heyenga, Anthony G.

    1998-03-01

    A system for three-dimensional, non-destructive acquisition of the structure of plant root systems is described. The plants are grown in a transparent medium (a 'gel pack') and are then placed on a rotating stage. The stage is rotated in 5-degree increments while images are captured using either traditional photography or a CCD camera. The individual images are then used as input to a tomographic (backprojection) algorithm to recover the original volumetric data. This reconstructed volume is then used as input to a 3D reconstruction system, whose software performs segmentation and mesh generation to derive a tessellated mesh of the root structure. This mesh can then be visualized using computer graphics, or used to derive measurements of root thickness and length. For initial validation studies, a wire model of known length and gauge was used as a calibration sample. The use of the transparent gel-pack medium, together with the gel tomography software, gives the plant biologist a method for non-destructive visualization and measurement of root structure that has previously been unattainable.
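    The backprojection principle can be shown on a toy 2D slice with just two orthogonal views: each projection is smeared back across the grid, and intensity reinforces where the object was. The real system uses views every 5 degrees and a full reconstruction; this is only the idea in miniature.

```python
# A tiny binary slice standing in for one plane of the gel pack.
obj = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]

row_proj = [sum(r) for r in obj]                       # 0-degree view
col_proj = [sum(r[j] for r in obj) for j in range(4)]  # 90-degree view

# Unfiltered backprojection: each pixel accumulates the projections of
# the rays passing through it; the object region reinforces.
recon = [[row_proj[i] + col_proj[j] for j in range(4)] for i in range(4)]
```

    With only two views the reconstruction is blurred by streaks; adding the remaining 5-degree views (and filtering) is what sharpens it into usable volumetric data.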

  13. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turnaround times often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, owing to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supports task scheduling for efficient load distribution and balancing, and consists of layered parallel software libraries that allow image-processing applications to share common functionality. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
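    The scalability analysis rests on Amdahl's law: for a parallel fraction p of the work on n cores, speedup(n) = 1 / ((1 - p) + p / n). The fractions below are illustrative, not the paper's measured values.

```python
def amdahl_speedup(p, n):
    """Amdahl's law: p is the parallelizable fraction, n the core count."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a small serial fraction caps the achievable speedup:
s12_hi = amdahl_speedup(0.99, 12)   # ~10.8x on 12 cores
s12_lo = amdahl_speedup(0.90, 12)   # ~5.7x on 12 cores
cap_hi = 1.0 / (1.0 - 0.99)         # asymptotic limit as n grows: 100x
```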

  14. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

    Background: Large-scale volumetric biomedical image data of three or more dimensions pose a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides interactive visualisation of grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results: The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/JavaScript client codes that run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135 GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provides multi-layer image views with user-controlled colour filtering and overlays. Conclusions: Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended Internet protocol and efficient server-based image tiling. The tools open the possibility of fast access to large image archives without requiring whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135 GB for a single image volume. PMID:22676296

  15. Contactless operating table control based on 3D image processing.

    PubMed

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to higher acceptance of, and affinity for, natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments such as the operating room. Here, manifold medical disciplines cause a great variety of procedures, and thus of staff and equipment. One universal challenge is meeting sterility requirements, for which common contact-afflicted remote interfaces always pose a potential risk to the process. The proposed operating-table control system overcomes this process risk and thus improves system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate, and manipulate all segments of the motorised system in a safe and intuitive way, with the gesture dynamics synchronised to the table movement. In a usability study, 15 participants evaluated the system, yielding a System Usability Scale (SUS, Brooke) score of 79. This indicates high potential for implementation and acceptance in interventional environments. In the near future, even higher-risk processes could be controlled with the proposed interface as such interfaces become safer and more direct. PMID:25569978
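    The usability figure is a System Usability Scale score, computed from ten 1-5 ratings with Brooke's standard formula: odd-numbered items contribute (rating - 1), even-numbered items (5 - rating), and the sum is scaled by 2.5 to give 0-100. The ratings below are hypothetical; only the scoring rule is standard.

```python
def sus_score(ratings):
    """Standard SUS scoring for ten 1-5 item ratings (item 1 first)."""
    assert len(ratings) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd)
                for i, r in enumerate(ratings))
    return total * 2.5

example = [4, 2, 4, 2, 5, 1, 4, 2, 4, 2]   # one hypothetical participant
score = sus_score(example)                  # 80.0 for this rating pattern
```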

  17. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In this paper we present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to present natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt a volumetric display method only for edge drawing, and a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge parts of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since the stereoscopic approach is used for the flat areas. With this system, many users can simultaneously view natural 3D objects at consistent positions and postures. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3D images without the contradiction between binocular convergence and focal accommodation.

  18. 3D image analysis of a volcanic deposit

    NASA Astrophysics Data System (ADS)

    de Witte, Y.; Vlassenbroeck, J.; Vandeputte, K.; Dewanckele, J.; Cnudde, V.; van Hoorebeke, L.; Ernst, G.; Jacobs, P.

    2009-04-01

    During the last decades, X-ray micro-CT has become a well-established technique for non-destructive testing in a wide variety of research fields. Using a series of X-ray transmission images of the sample at different projection angles, a stack of 2D cross-sections is reconstructed, resulting in a 3D volume representing the X-ray attenuation coefficients of the sample. Since the attenuation coefficient of a material depends on its density and atomic number, this volume provides valuable information about the internal structure and composition of the sample. Although much qualitative information can be derived directly from this 3D volume, researchers usually require quantitative results to provide a full characterization of the sample under investigation. This type of information must be retrieved using specialized image-processing software. For most samples, it is imperative that this processing be performed on the 3D volume as a whole, since a sequence of 2D cross-sections usually forms an inadequate approximation of the actual structure. The complete processing of a volume consists of three sequential steps. First, the volume is segmented into a set of objects. What these objects represent depends on which property of the sample needs to be analysed; they can be, for instance, concavities, dense inclusions, or the matrix of the sample. When dealing with noisy data, it may be necessary to filter the data before applying the segmentation. The second step is the separation of connected objects into sets of smaller objects. This is necessary when objects appear connected because of the limited resolution and contrast of the scan; separation is also useful when the sample contains a network structure and one wants to study the individual cells of the network. The third and last step consists of the actual analysis of the various objects to derive the different parameters of interest. While some parameters require extensive
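    The first processing step, segmenting a binarised volume into objects, can be sketched as 6-connected component labelling with a plain flood fill. This is only the principle; real micro-CT volumes need optimised implementations, and the separation and analysis steps are not shown.

```python
def label_components(vol):
    """vol: nested lists vol[z][y][x] of 0/1.
    Returns (label volume, number of 6-connected components)."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    current = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if vol[z][y][x] and not labels[z][y][x]:
                    current += 1                  # found a new object
                    stack = [(z, y, x)]
                    while stack:                  # flood fill its voxels
                        cz, cy, cx = stack.pop()
                        if not (0 <= cz < nz and 0 <= cy < ny and 0 <= cx < nx):
                            continue
                        if not vol[cz][cy][cx] or labels[cz][cy][cx]:
                            continue
                        labels[cz][cy][cx] = current
                        stack += [(cz+1, cy, cx), (cz-1, cy, cx),
                                  (cz, cy+1, cx), (cz, cy-1, cx),
                                  (cz, cy, cx+1), (cz, cy, cx-1)]
    return labels, current

# Two separate inclusions in a 3x3x3 volume are labelled as two objects:
vol = [[[0] * 3 for _ in range(3)] for _ in range(3)]
vol[0][0][0] = 1
vol[2][2][2] = 1
labels, n = label_components(vol)
```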

  19. Thermal 3D modeling system based on 3-view geometry

    NASA Astrophysics Data System (ADS)

    Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-11-01

    In this paper, we propose a novel thermal three-dimensional (3D) modeling system that includes 3D shape, visual, and thermal infrared information and solves the registration problem among these three types of information. The proposed system consists of a projector, a visual camera, and a thermal camera (PVT). To generate 3D shape information, we use a structured-light technique involving the visual camera and the projector. The thermal camera is added to the structured-light system to provide thermal information. To solve the correspondence problem between the three sensors, we use three-view geometry. Finally, we obtain registered PVT data comprising visual, thermal, and 3D shape information. Among various potential applications such as industrial measurement, biological experiments, and military use, we have adapted the proposed method to biometrics, particularly face recognition. With the proposed method, we obtain multi-modal 3D face data that include not only textural information but also head pose, 3D shape, and thermal information. Experimental results show that the performance of the proposed face recognition system is not limited by head-pose variation, a serious problem in face recognition.

  20. Light sheet adaptive optics microscope for 3D live imaging

    NASA Astrophysics Data System (ADS)

    Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

    2013-02-01

    We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within living samples. SPIM is a particularly interesting platform for the incorporation of AO because the illumination and imaging paths are decoupled, and AO may be useful in both paths. In this paper, we report the use of AO applied to the imaging path of a SPIM, demonstrating significant improvement in image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront-sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample, in addition to the aberration from the biological sample itself.
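    A minimal sketch of a modal, wavefront-sensorless loop of the kind mentioned above: each aberration mode is scanned over a trial range and the setting that maximizes an image-sharpness metric is kept. The `imaging_system` function and all numeric values are hypothetical stand-ins for the real microscope, not the authors' implementation.

```python
import numpy as np

def sharpness(img):
    # Image-quality metric driving the optimization: total squared intensity,
    # which is largest when the light is concentrated (sharp focus).
    return float(np.sum(img ** 2))

def imaging_system(correction, true_aberration):
    # Hypothetical stand-in for the microscope: the residual aberration blurs
    # an energy-normalized spot; a smaller residual gives a sharper image.
    residual = true_aberration - correction
    width = 1.0 + np.sum(residual ** 2)
    x = np.linspace(-5, 5, 64)
    xx, yy = np.meshgrid(x, x)
    return np.exp(-(xx**2 + yy**2) / (2 * width**2)) / width**2

true_ab = np.array([0.8, -0.5, 0.3])   # unknown coefficients of three modes
correction = np.zeros(3)

# Modal, sensorless search: scan one mode at a time and keep the best setting.
for mode in range(3):
    trials = np.linspace(-1.0, 1.0, 21)
    scores = []
    for t in trials:
        c = correction.copy()
        c[mode] = t
        scores.append(sharpness(imaging_system(c, true_ab)))
    correction[mode] = trials[np.argmax(scores)]

print(correction)   # converges to the true aberration coefficients
```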

  1. 3D imaging of nanomaterials by discrete tomography.

    PubMed

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called the discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi2 nanocrystals, a carbon nanotube grown from a catalyst particle, and a single gold nanoparticle, respectively. PMID:19269094

  2. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
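    As a rough illustration of a relative-entropy index of this general kind (not the specific formulation of Bird et al. 2006), the sketch below compares the observed distribution of pore counts per box against the binomial distribution expected for spatially random pores with the same overall porosity; structured voids yield a larger divergence.

```python
import numpy as np
from math import comb

def relative_entropy(binary, box):
    # Partition a 3D binary (pore/solid) volume into boxes of side `box`,
    # compute the distribution of pore counts per box, and measure its
    # Kullback-Leibler divergence from the binomial distribution expected
    # if pores were placed at random with the same overall porosity.
    n = binary.shape[0] // box
    counts = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                sub = binary[i*box:(i+1)*box, j*box:(j+1)*box, k*box:(k+1)*box]
                counts.append(int(sub.sum()))
    m = box ** 3
    p_pore = binary.mean()
    hist = np.bincount(counts, minlength=m + 1) / len(counts)   # observed P(count)
    ref = np.array([comb(m, c) * p_pore**c * (1 - p_pore)**(m - c)
                    for c in range(m + 1)])                      # random reference
    mask = hist > 0
    return float(np.sum(hist[mask] * np.log(hist[mask] / ref[mask])))

rng = np.random.default_rng(1)
random_voids = rng.random((32, 32, 32)) < 0.3            # spatially random pores
clustered = np.zeros((32, 32, 32), dtype=bool)
clustered[:16, :, :] = rng.random((16, 32, 32)) < 0.6    # same porosity, clustered

print(relative_entropy(random_voids, 4), relative_entropy(clustered, 4))
```

The clustered structure departs strongly from the random reference, so its relative entropy is much larger, which is the sense in which such an index quantifies soil structure.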

  3. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives, including nonproliferation, treaty verification, national security, and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters, in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection, or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or satellite links) using a 3D computer model of the area that is rendered from actual sensor data.

  4. OPTIMIZATION OF 3-D IMAGE-GUIDED NEAR INFRARED SPECTROSCOPY USING BOUNDARY ELEMENT METHOD

    PubMed Central

    Srinivasan, Subhadra; Carpenter, Colin; Pogue, Brian W.; Paulsen, Keith D.

    2010-01-01

    Multimodality imaging systems combining optical techniques with MRI/CT provide high-resolution functional characterization of tissue by imaging molecular and vascular biomarkers. To optimize these hybrid systems for clinical use, faster and automatable algorithms are required for 3-D imaging. Towards this end, a boundary element model was used to incorporate tissue boundaries from MRI/CT into the image formation process. This method uses surface rendering to describe light propagation in 3-D using the diffusion equation. Parallel computing provided a speedup of up to 54% in computation time. Simulations showed that the location of the NIRS probe was crucial for quantitatively accurate estimation of tumor response. A change of up to 61% was seen between cycles 1 and 3 in monitoring tissue response to neoadjuvant chemotherapy. PMID:20523751

  5. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at considerable user time expense. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also support a light level of manual editing.
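    The volume-based similarity measures named above are straightforward to compute on boolean masks. A minimal sketch with toy volumes standing in for automated and hand-segmented kidney masks:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) on boolean masks.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    # Jaccard coefficient: |A ∩ B| / |A ∪ B|.
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Toy volumes: two overlapping 12x12x12 cubes in a 20x20x20 grid.
auto = np.zeros((20, 20, 20), dtype=bool)
auto[4:16, 4:16, 4:16] = True
gold = np.zeros((20, 20, 20), dtype=bool)
gold[5:17, 5:17, 5:17] = True

d = dice(auto, gold)
j = jaccard(auto, gold)
# Relative volume difference, a size-based measure:
rvd = (auto.sum() - gold.sum()) / gold.sum()
print(d, j, rvd)
```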

  6. Deformable M-Reps for 3D Medical Image Segmentation

    PubMed Central

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.

    2013-01-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures – each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported. 
PMID

  7. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531
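    The gross-failure statistic quoted above is simply the fraction of cases whose projection distance error exceeds 20 mm. A sketch on synthetic values (the PDE numbers below are invented for illustration, with seven failures planted among 61 cases; they are not the study's data):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical projection distance errors (mm) for 61 registration cases;
# seven gross failures are planted deliberately (7/61 ≈ 11.48%).
pde = np.abs(rng.normal(3.0, 2.0, 61))
pde[:7] = [25.0, 32.0, 21.5, 40.0, 22.0, 28.0, 24.0]

gross_failure_rate = float(np.mean(pde > 20.0)) * 100.0
print(f"{gross_failure_rate:.2f}% of cases exceed 20 mm")
```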

  8. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  9. Range walk error correction using prior modeling in photon counting 3D imaging lidar

    NASA Astrophysics Data System (ADS)

    He, Weiji; Chen, Yunfei; Miao, Zhuang; Chen, Qian; Gu, Guohua; Dai, Huidong

    2013-09-01

    A real-time correction method for range walk error in photon counting 3D imaging lidar is proposed in this paper. We establish the photon detection model and pulse output delay model for the GmAPD, which indicate that range walk error in photon counting 3D imaging lidar is mainly affected by the number of photons in the laser echo pulse. A measurable quantity, the laser pulse response rate, is defined as a substitute for the number of photons in the laser echo pulse, and the expression of the range walk error with respect to the laser pulse response rate is obtained using prior calibration. By recording the photon arrival time distribution, the measurement error for unknown targets is predicted using the established range walk error function, and the range-walk-compensated image is obtained. Thus, real-time correction of the measurement error in photon counting 3D imaging lidar is implemented. The experimental results show that the range walk error caused by differences in the reflected energy of the target can be effectively avoided without increasing the complexity of the photon counting 3D imaging lidar system.
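    The compensation idea described above amounts to a calibrated lookup: measure the pulse response rate, predict the walk error from the prior calibration curve, and subtract it. A sketch with invented calibration values (not the paper's measurements):

```python
import numpy as np

# Prior calibration (assumed values): range-walk error (mm) measured as a
# function of the laser pulse response rate for a target at known distance.
cal_rate = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8])    # pulse response rate
cal_walk = np.array([45.0, 30.0, 18.0, 9.0, 4.0, 1.5])  # walk error, mm

def correct_range(raw_range_mm, response_rate):
    # Predict the walk error from the calibrated curve and subtract it.
    walk = np.interp(response_rate, cal_rate, cal_walk)
    return raw_range_mm - walk

# A bright and a dark pixel observing the same surface at 5000 mm:
bright = correct_range(5001.5, 0.8)
dark = correct_range(5030.0, 0.1)
print(bright, dark)   # both compensated back to the true range
```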

  10. Optical 3D-coordinate measuring system using structured light

    NASA Astrophysics Data System (ADS)

    Schreiber, Wolfgang; Notni, Gunther; Kuehmstedt, Peter; Gerber, Joerg; Kowarschik, Richard M.

    1996-09-01

    The paper describes an optical shape measuring technique based on a consistent principle using fringe projection. We demonstrate a real 3D coordinate measuring system in which the scale of the coordinates is given only by the illumination structures. This method has the advantage that the aberrations of the observing system and the depth-dependent imaging scale have no influence on the measuring accuracy, and, moreover, the measurements are independent of the position of the camera with respect to the object under test. Furthermore, it is shown that the influence of specular effects of the surface on the measurement result can be eliminated. Moreover, we developed a very simple algorithm to calibrate the measuring system. The measurement examples show that a measuring accuracy of 10^-4 (i.e., 10 micrometers) within an object volume of 100 x 100 x 70 mm³ is achievable. Furthermore, it is demonstrated that the set of coordinate values can be processed in CNC and CAD systems.
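    Fringe projection systems of this kind typically recover phase from several phase-shifted patterns. As background (not this paper's specific algorithm), the standard four-step phase-shifting computation looks like:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    # Standard four-step phase-shifting formula for fringe patterns shifted
    # by 0, 90, 180 and 270 degrees.
    return np.arctan2(i3 - i1, i0 - i2)

# Simulate one pixel under a projected fringe with unknown phase phi:
phi = 1.2
a, b = 0.5, 0.4   # background intensity and fringe modulation
shots = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]

phi_rec = four_step_phase(*shots)
print(phi_rec)    # recovers phi
```

The recovered phase is independent of the background `a` and modulation `b`, which is one reason phase-shifting is robust against surface reflectivity variations.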

  11. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system that integrates annotations into a visualization system. Annotations are embedded in the 3D data space using the Post-it metaphor. This embedding allows context-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features that allow users to store parameters of visualization tools and sketch 3D volumes.

  12. Multiresolution 3-D reconstruction from side-scan sonar images.

    PubMed

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.
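    The Lambertian diffuse model underlying the method can be sketched directly: the observed return is reflectivity times beam-pattern gain times the cosine of the incidence angle, and inverting the cosine term is what links intensity back to seabed slope. All values below are illustrative:

```python
import numpy as np

def lambertian_intensity(reflectivity, beam_pattern, incidence_deg):
    # Lambertian diffuse model: the backscattered return is reflectivity
    # times beam-pattern gain times the cosine of the incidence angle.
    return reflectivity * beam_pattern * np.cos(np.radians(incidence_deg))

# Forward model on two seabed facets with the same reflectivity:
flat = lambertian_intensity(0.8, 1.0, 60.0)     # facet seen near grazing
tilted = lambertian_intensity(0.8, 1.0, 30.0)   # facet turned toward the sonar

# Inverting the model (the role of the optimization in the paper) recovers
# the incidence angle, which constrains the local seabed slope:
recovered = np.degrees(np.arccos(flat / (0.8 * 1.0)))
print(flat, tilted, recovered)
```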

  13. Generic camera model and its calibration for computational integral imaging and 3D reconstruction.

    PubMed

    Li, Weiming; Li, Youfu

    2011-03-01

    Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, the images present higher fidelity and preserve more high spatial frequency components. To the best of our knowledge, this is the first attempt in applying a generic camera model to an II system.

  14. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction that uses a binocular stereo vision system, consisting of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental: at each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of matching interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this kind of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
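    The triangulation step used above to recover 3D points from two projection matrices can be sketched with the standard linear (DLT) method; the camera parameters below are invented for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one 3D point from two 3x4 projection
    # matrices and its matched image points x1, x2 (pixel coordinates).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector of A, homogeneous 3D point
    return X[:3] / X[3]

# Toy stereo pair: one camera at the origin, one translated along x.
K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.], [0], [0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([50., 20., 1000.])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)   # recovers X_true up to numerical precision
```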

  15. Computational-optical microscopy for 3D biological imaging beyond the diffraction limit

    NASA Astrophysics Data System (ADS)

    Grover, Ginni

    In recent years, super-resolution imaging has become an important fluorescence microscopy tool. It has enabled imaging of structures smaller than the optical diffraction limit with resolution below 50 nm. Extension to high-resolution volume imaging has been achieved by integration with various optical techniques. This thesis discusses the development of a fluorescence microscope for high-resolution, extended-depth, three-dimensional (3D) imaging, achieved by integrating computational methods with optical systems. The first part of the thesis discusses point spread function (PSF) engineering for volume imaging. A class of PSFs, referred to as double-helix (DH) PSFs, is generated. These PSFs exhibit two focused spots in the image plane that rotate about the optical axis, encoding depth in the rotation of the image. They extend the depth of field by a factor of up to ~5. Precision performance of the DH-PSFs, based on an information-theoretical analysis, is compared with other 3D methods, with the conclusion that the DH-PSFs provide the best precision and the longest depth of field. Out of the various possible DH-PSFs, a suitable PSF is obtained for super-resolution microscopy. The DH-PSFs are implemented in imaging systems, such as a microscope, with a special phase modulation at the pupil plane. Surface-relief elements that are polarization-insensitive and ~90% light-efficient are developed for phase modulation. The photon-efficient DH-PSF microscopes thus developed are used, along with optimal position estimation algorithms, for tracking and super-resolution imaging in 3D. Imaging at depths of field of up to 2.5 μm is achieved without focus scanning. Microtubules were imaged with a 3D resolution of (6, 9, 39) nm, in close agreement with the theoretical limit. A quantitative study of the co-localization of two proteins in a volume was conducted in live bacteria. In the last part of the thesis, practical aspects of the DH-PSF microscope are

  16. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. The 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame size at one frame per second, with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  17. Characterization of 3D printing output using an optical sensing system

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    This paper presents the experimental design and initial testing of a system to characterize the progress and performance of a 3D printer. The system is based on five Raspberry Pi single-board computers. It collects images of the 3D printed object, which are compared to an ideal model. The system, while suitable for printers of all sizes, can potentially be produced at a sufficiently low cost to allow its incorporation into consumer-grade printers. The efficacy and accuracy of this system are presented and discussed. The paper concludes with a discussion of the benefits of being able to characterize 3D printer performance.

  18. 3D image fusion and guidance for computer-assisted bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, W. E.; Rai, L.; Merritt, S. A.; Lu, K.; Linger, N. T.; Yu, K. C.

    2005-11-01

    The standard procedure for diagnosing lung cancer involves two stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in the information they physically provide, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsy blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information through a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented reality of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between different physicians, but also increases the biopsy success rate.

  19. A navigation system for flexible endoscopes using abdominal 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Kaar, M.; Bathia, Amon; Bathia, Amar; Lampret, A.; Birkfellner, W.; Hummel, J.; Figl, M.

    2014-09-01

    A navigation system for flexible endoscopes equipped with ultrasound (US) scan heads is presented. In contrast to similar systems, abdominal 3D-US is used for image fusion of the pre-interventional computed tomography (CT) to the endoscopic US. A 3D-US scan, tracked with an optical tracking system (OTS), is taken pre-operatively together with the CT scan. The CT is calibrated using the OTS, providing the transformation from CT to 3D-US. Immediately before intervention a 3D-US tracked with an electromagnetic tracking system (EMTS) is acquired and registered intra-modal to the preoperative 3D-US. The endoscopic US is calibrated using the EMTS and registered to the pre-operative CT by an intra-modal 3D-US/3D-US registration. Phantom studies showed a registration error for the US to CT registration of 5.1 mm ± 2.8 mm. 3D-US/3D-US registration of patient data gave an error of 4.1 mm compared to 2.8 mm with the phantom. From this we estimate an error on patient experiments of 5.6 mm.
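    The registration chain described above composes several rigid transforms (CT to pre-operative 3D-US, then to intra-operative 3D-US, then to endoscopic US); with 4x4 homogeneous matrices the chaining is a matrix product. All transform values below are illustrative, not the paper's calibration results:

```python
import numpy as np

def rigid(rz_deg, t):
    # 4x4 homogeneous rigid transform: rotation about z plus translation t.
    c, s = np.cos(np.radians(rz_deg)), np.sin(np.radians(rz_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# Assumed calibration/registration results (illustrative values only):
T_us3d_from_ct = rigid(10, [5, -2, 0])      # CT -> pre-op 3D-US
T_intra_from_pre = rigid(-4, [1, 0, 3])     # pre-op 3D-US -> intra-op 3D-US
T_endo_from_intra = rigid(25, [0, 7, -1])   # intra-op 3D-US -> endoscopic US

# Chaining maps a CT point all the way into endoscopic-US coordinates:
T_endo_from_ct = T_endo_from_intra @ T_intra_from_pre @ T_us3d_from_ct

p_ct = np.array([10.0, 20.0, 30.0, 1.0])
p_endo = T_endo_from_ct @ p_ct
# Step-by-step application gives the same point:
p_step = T_endo_from_intra @ (T_intra_from_pre @ (T_us3d_from_ct @ p_ct))
print(np.allclose(p_endo, p_step))
```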

  20. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  1. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, while remaining sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  2. Dual array 3D electron cyclotron emission imaging at ASDEX Upgrade

    SciTech Connect

    Classen, I. G. J.; Bogomolov, A. V.; Domier, C. W.; Luhmann, N. C.; Suttrop, W.; Boom, J. E.; Tobias, B. J.; Donné, A. J. H.

    2014-11-15

    In a major upgrade, the (2D) electron cyclotron emission imaging diagnostic (ECEI) at ASDEX Upgrade has been equipped with a second detector array, observing a different toroidal position in the plasma, to enable quasi-3D measurements of the electron temperature. The new system will measure a total of 288 channels, in two 2D arrays, toroidally separated by 40 cm. The two detector arrays observe the plasma through the same vacuum window, both under a slight toroidal angle. The majority of the field lines are observed by both arrays simultaneously, thereby enabling a direct measurement of the 3D properties of plasma instabilities like edge localized mode filaments.

  3. Dual array 3D electron cyclotron emission imaging at ASDEX Upgrade.

    PubMed

    Classen, I G J; Domier, C W; Luhmann, N C; Bogomolov, A V; Suttrop, W; Boom, J E; Tobias, B J; Donné, A J H

    2014-11-01

    In a major upgrade, the (2D) electron cyclotron emission imaging diagnostic (ECEI) at ASDEX Upgrade has been equipped with a second detector array, observing a different toroidal position in the plasma, to enable quasi-3D measurements of the electron temperature. The new system will measure a total of 288 channels, in two 2D arrays, toroidally separated by 40 cm. The two detector arrays observe the plasma through the same vacuum window, both under a slight toroidal angle. The majority of the field lines are observed by both arrays simultaneously, thereby enabling a direct measurement of the 3D properties of plasma instabilities like edge localized mode filaments. PMID:25430246

  4. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For many UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based technique and an image space-based technique, and compared their performance. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space; for each height, the corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among the images were extracted and a stereo coverage network was defined by forming a maximum spanning tree from the tiepoints. The experiments confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations: the image space-based matching results contained some blanks in the 3D point clouds, while the object space-based matching results showed more blunders and noisy local height variations. We suspect these are due to inaccurate orientation parameters. This work is ongoing; we will further test our approach with more precise orientation parameters.

  5. Systems biology in 3D space--enter the morphome.

    PubMed

    Lucocq, John M; Mayhew, Terry M; Schwab, Yannick; Steyer, Anna M; Hacker, Christian

    2015-02-01

    Systems-based understanding of living organisms depends on acquiring huge datasets from arrays of genes, transcripts, proteins, and lipids. These data, referred to as 'omes', are assembled using 'omics' methodologies. Currently a comprehensive, quantitative view of cellular and organellar systems in 3D space at nanoscale/molecular resolution is missing. We introduce here the term 'morphome' for the distribution of living matter within a 3D biological system, and 'morphomics' for methods of collecting 3D data systematically and quantitatively. A sampling-based approach termed stereology currently provides rapid, precise, and minimally biased morphomics. We propose that stereology solves the 'big data' problem posed by emerging wide-scale electron microscopy (EM) and can establish quantitative links between the newer nanoimaging platforms such as electron tomography, cryo-EM, and correlative microscopy.
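    As a minimal, hypothetical illustration of the sampling-based (stereological) approach the authors advocate, the classical Cavalieri estimator turns systematic point counts on equally spaced sections into a volume estimate without segmenting the whole image stack:

```python
def cavalieri_volume(point_hits, grid_spacing_um, section_spacing_um):
    """Cavalieri volume estimate from systematic, equally spaced sections.

    point_hits: for each section, the number of uniform-grid points that
    land on the structure of interest. Each point represents an area of
    grid_spacing_um**2; sections are section_spacing_um apart.
    """
    area_per_point = grid_spacing_um ** 2
    # Summed hit area per section times section spacing estimates volume
    return sum(point_hits) * area_per_point * section_spacing_um
```

    Because both the section positions and the counting grid are placed with uniform random offsets in practice, the estimate is minimally biased regardless of the structure's shape, which is the 'big data' economy the abstract refers to.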

  6. An efficient way of high-contrast, quasi-3D cellular imaging: off-axis illumination.

    PubMed

    Hostounský, Zdenĕk; Pelc, Radek

    2006-07-31

    An imaging system enabling convenient visualisation of cells and other small objects is presented. It is an adaptation of the optical microscope condenser, with a built-in edge (relief) diaphragm brought close to the condenser iris diaphragm to enable high-contrast pseudo-relief (quasi-3D) imaging. The device broadens the family of available apparatus based on off-axis (also called anaxial, asymmetric, inclined, oblique, schlieren-type, or sideband) illumination. The simplicity of the design makes the condenser a user-friendly, dedicated device delivering high-contrast quasi-3D images of phase objects, which are nearly invisible under ordinary (axial) illumination. Phase contrast microscopy, commonly used to visualise phase objects, does not deliver the quasi-3D effect and introduces a disturbing 'halo' around edges. The performance of the device is demonstrated on living cells and tissue replicas. High-contrast quasi-3D images of cell-free preparations of biological origin (paper fibres and microcrystals) are shown as well. PMID:16678908
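    Off-axis illumination renders phase gradients as intensity shading, which is what produces the embossed, quasi-3D look. The optics cannot be reduced to code, but a rough numerical analogue — our assumption for illustration, not the authors' device — is a directional finite difference of the phase map:

```python
import numpy as np

def pseudo_relief(phase, axis=1):
    """Approximate off-axis illumination contrast as a directional gradient.

    Oblique illumination makes one flank of a phase feature brighter and the
    opposite flank darker; a signed finite difference mimics that shading.
    Returns values around mid-grey (0.5), brighter/darker where phase rises/falls.
    """
    grad = np.diff(phase.astype(float), axis=axis)
    # Map the signed gradient onto mid-grey +/- relief shading in [0, 1]
    return 0.5 + grad / (2 * np.max(np.abs(grad)) + 1e-12)
```

    A flat phase map comes out uniformly mid-grey, matching the observation that phase objects are invisible under axial illumination, while edges appear as the bright/dark relief pairs seen in the quasi-3D images.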

  7. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chun