Science.gov

Sample records for 3D imaging systems

  1. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises a radiation source for irradiating the object, which is rotationally movable about the object, and a detector for detecting backscattered radiation, which can be disposed on substantially the same side of the object as the source and can likewise be rotationally movable about the object. The detector can be separated into multiple detector segments, each having a single line-of-sight projection through the object and detecting radiation along that line of sight; each segment can thereby isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three-dimensional reconstruction of the object. Other embodiments are described.

  2. 3D imaging system for biometric applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Abramovich, Gil; Paruchura, Vijay; Manickam, Swaminathan; Vemury, Arun

    2010-04-01

    There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolution in the tens-of-microns range, but does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurements, but because they rely on a triangulation angle, and thus a separated second viewpoint, they may not be practical for viewing a subject 10 meters away. Even when working close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically large system that may get in the way. This paper describes methods to collect medium-resolution 3D data, plus high-resolution 2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. The optical methods considered, variations on these methods, and experimental data obtained with the approach are presented.

  3. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  4. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns in the bed at each level. The current flux field patterns are sensed, and a density pattern of the bed at each level is determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  5. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment were also developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  6. Gastric Contraction Imaging System Using a 3-D Endoscope.

    PubMed

    Yoshimoto, Kayo; Yamada, Kenji; Watabe, Kenji; Takeda, Maki; Nishimura, Takahiro; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Nishida, Tsutomu; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko

    2014-01-01

    This paper presents a gastric contraction imaging system for assessment of gastric motility using a 3-D endoscope. Diagnosis of gastrointestinal diseases is mainly based on morphological abnormalities. However, gastrointestinal symptoms are sometimes apparent without visible abnormalities. One of the major factors in these diseases is abnormal gastrointestinal motility. For assessment of gastric motility, a gastric motility imaging system is needed. To assess the dynamic motility of the stomach, the proposed system measures 3-D gastric contractions derived from a 3-D profile of the stomach wall obtained with a newly developed 3-D endoscope. After obtaining contraction waves, their frequency, amplitude, and speed of propagation can be calculated using a Gaussian function. The proposed system was evaluated for 3-D measurements of several objects with known geometries. The results showed that the surface profiles could be obtained with an error of [Formula: see text] of the distance between two different points on images. Subsequently, we evaluated the validity of a prototype system using a simulated wave model. In the experiment, the amplitude and position of waves could be measured with 1-mm accuracy. The present results suggest that the proposed system can measure the speed and amplitude of contractions. The system has low invasiveness and can assess the motility of the stomach wall directly in a 3-D manner. Our method can be used for examination of gastric morphological and functional abnormalities.

  7. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  8. Proposed traceable structural resolution protocols for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David; Beraldin, J.-Angelo; Cournoyer, Luc; Carrier, Benjamin; Blais, François

    2009-08-01

    A protocol for determining structural resolution using a potentially traceable reference material is proposed. Where possible, terminology was selected to conform to that published in the ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, which obtain spatial data from the total field of view at once, and 3D range scanners, which accumulate spatial data for the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.

  9. Digital acquisition system for high-speed 3-D imaging

    NASA Astrophysics Data System (ADS)

    Yafuso, Eiji

    1997-11-01

    High-speed digital three-dimensional (3-D) imagery is possible using multiple independent charge-coupled device (CCD) cameras with sequentially triggered acquisition and individual field storage capability. The system described here utilizes sixteen independent cameras, providing versatility in configuration and image acquisition. By aligning the cameras in nearly coincident lines-of-sight, a sixteen-frame two-dimensional (2-D) sequence can be captured. The delays can be individually adjusted to yield a greater number of acquired frames during the more rapid segments of the event. Additionally, individual integration periods may be adjusted to ensure adequate radiometric response while minimizing image blur. An alternative alignment and triggering scheme arranges the cameras into two angularly separated banks of eight cameras each. By simultaneously triggering correlated stereo pairs, an eight-frame sequence of stereo images may be captured. In the first alignment scheme the camera lines-of-sight cannot be made precisely coincident, so representation of the data as a monocular sequence introduces the issue of registering independent camera coordinates with the real scene. This issue arises more significantly when using the stereo pair method to reconstruct quantitative 3-D spatial information of the event as a function of time. The principal development here is the derivation and evaluation of a solution transform and its inverse for the digital data, which yields a 3-D spatial mapping as a function of time.

  10. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  11. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  12. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as a part of the comprehensive approach of the European FP7 project TeraSCREEN, using multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within one single array. Using a bandwidth of 30 GHz, a range resolution of up to 5 mm is obtained. With the 16×16 MIMO system, 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles, where the angular resolution is obtained by a focusing elliptical mirror. With this system a high-resolution 3D image can be generated at 4 frames per second, each frame containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system, underlining the feasibility of the approach.
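    The 5 mm figure quoted above follows from the standard radar range-resolution relation ΔR = c/(2B). A minimal check, using only the 30 GHz bandwidth stated in the abstract:

```python
# Range resolution of a wideband radar: delta_R = c / (2 * B).
# The 30 GHz bandwidth is taken from the abstract; everything else is standard.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Smallest resolvable range separation for a given sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

if __name__ == "__main__":
    dr = range_resolution(30e9)
    print(f"range resolution: {dr * 1e3:.2f} mm")  # ~5 mm, matching the abstract
```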

  13. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
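    The SVD analysis described above can be illustrated with a toy forward model: build a small imaging operator A (detectors × voxels), take its SVD, and count the singular values above an assumed noise floor. The random matrix and the 1% threshold here are stand-ins, not the authors' actual 15-element system model.

```python
import numpy as np

# Toy illustration: the rank structure of the imaging operator bounds the
# complexity of objects that can be reconstructed. A random matrix stands in
# for the real photoacoustic system matrix; the noise floor is an assumption.
rng = np.random.default_rng(0)
n_detectors, n_voxels = 15, 200
A = rng.standard_normal((n_detectors, n_voxels))

s = np.linalg.svd(A, compute_uv=False)     # singular values, descending
noise_floor = 0.01 * s[0]                  # assumed relative noise level
n_measurable = int(np.sum(s > noise_floor))
print(f"measurable singular values: {n_measurable} of {len(s)}")
```

    At most `n_detectors` singular values can be nonzero, which is why a 15-element scheme can only resolve correspondingly simple (sparse) objects.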

  14. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  15. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  16. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  17. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronological evolution of 3D surface imaging and summarizes the current status of today's facial surface capturing technology. It also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, providing surgeons with the background necessary for evaluating the systems and deciding which systems to incorporate into their own practice.

  18. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, a fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming increasingly important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Deformed by the finger surface, the fringe patterns are captured from another viewpoint by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
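    The core of any fringe projection pipeline is recovering a wrapped phase map from phase-shifted fringe images. As a sketch of that generic step (not the authors' full optimum three-fringe-number unwrapping scheme), the standard three-step formula with shifts of -120°, 0°, +120° is:

```python
import math

# Standard three-step phase-shifting demodulation. For fringe intensities
# I_k = a + b*cos(phi + delta_k) with delta = (-120, 0, +120) degrees:
#   I1 - I3        = sqrt(3) * b * sin(phi)
#   2*I2 - I1 - I3 = 3 * b * cos(phi)
# so phi follows from atan2 of those two quantities.
def wrapped_phase(i1: float, i2: float, i3: float) -> float:
    """Recover the wrapped phase from three phase-shifted intensity samples."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Self-check with synthetic fringes:
a, b, phi = 0.5, 0.4, 1.234
deltas = (-2.0 * math.pi / 3.0, 0.0, 2.0 * math.pi / 3.0)
i1, i2, i3 = (a + b * math.cos(phi + d) for d in deltas)
print(wrapped_phase(i1, i2, i3))  # recovers phi = 1.234
```

    The wrapped phase is then unwrapped (here via the multi-fringe-number method) and converted to height through the system calibration.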

  19. High definition 3D imaging lidar system using CCD

    NASA Astrophysics Data System (ADS)

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong

    2016-10-01

    In this study we propose and demonstrate a novel technique for measuring distance with high-definition three-dimensional imaging. To meet the stringent requirements of various missions, spatial resolution and range precision are important properties for flash LIDAR systems. The proposed LIDAR system employs a polarization modulator and a CCD. When a laser pulse is emitted, it triggers the polarization modulator. The pulse is scattered by the target and reflected back to the LIDAR system while the polarization modulator is rotating, so its polarization state is a function of time. The laser-return pulse passes through the polarization modulator in a certain polarization state, and that state is calculated from the intensities of the laser pulses measured by the CCD. Because the function relating time and polarization state is known, the polarization state can be converted to time-of-flight. By adopting a polarization modulator and a CCD, and measuring only the energy of a laser pulse to obtain range, the proposed three-dimensional imaging LIDAR system can acquire a high-resolution three-dimensional image. Since the system measures only the energy of the laser pulse, a high-bandwidth detector and a high-resolution TDC are not required for high range precision. The proposed method is expected to be an alternative for many three-dimensional imaging LIDAR applications that require high resolution.
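    The time-to-range conversion described above can be sketched in a few lines: if the modulator's polarization angle advances linearly in time, θ(t) = ωt, then a measured return angle gives the time-of-flight and hence the range. The modulation rate ω below is an assumed illustrative value, not a parameter from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Illustrative mapping from measured polarization angle to range, assuming a
# linearly rotating polarization state theta(t) = omega * t:
#   time-of-flight t = theta / omega,  range R = c * t / 2.
def range_from_angle(theta_rad: float, omega_rad_per_s: float) -> float:
    tof = theta_rad / omega_rad_per_s
    return C * tof / 2.0

omega = 2.0 * math.pi * 1e6   # assumed: one polarization cycle per microsecond
theta = 2.0 * math.pi * 0.1   # measured return at 10% of a cycle
print(f"range: {range_from_angle(theta, omega):.2f} m")  # 0.1 us TOF, ~15 m
```

    The achievable range precision then depends on how finely the polarization angle can be resolved from the CCD intensities, rather than on detector bandwidth.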

  20. A 3-D fluorescence imaging system incorporating structured illumination technology

    NASA Astrophysics Data System (ADS)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid at different spatial frequencies and supports the use of different lenses. A calibration process was developed to achieve consistent phase shifts of the Optigrid. Post-processing extracted depth information by depth modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program; the disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State, with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
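    The depth-modulation step hinted at above is commonly done with the Neil and Wilson square-law demodulation: three images with the grid shifted by 0°, 120° and 240° yield the in-focus (modulated) amplitude at each pixel. This is a sketch of that standard technique; the abstract does not specify the students' exact algorithm.

```python
import math

# Structured-illumination optical sectioning: for a pixel modeled as
# I_k = A + B*cos(theta + delta_k), delta = (0, 120, 240) degrees, the
# modulated (in-focus) amplitude B is recovered by square-law demodulation.
def sectioned_amplitude(i1: float, i2: float, i3: float) -> float:
    return (math.sqrt(2.0) / 3.0) * math.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )

# Self-check: synthetic pixel with offset A, modulation B, arbitrary phase.
A, B, theta = 1.0, 0.3, 0.7
i1, i2, i3 = (A + B * math.cos(theta + d)
              for d in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0))
print(sectioned_amplitude(i1, i2, i3))  # recovers B = 0.3
```

    Only in-focus planes see a strong grid modulation, so the recovered amplitude falls off with defocus, which is what provides the depth discrimination.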

  1. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data is desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to achieve over 1000 simultaneous A-scans at more than 75 frames per second.
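    In a spectral-domain system of this kind, each sample beam yields a spectral interferogram whose Fourier transform localizes the surface depth. The toy numbers below (wavelength band, depth) are illustrative assumptions, not the prototype's actual design:

```python
import numpy as np

# Toy spectral-domain A-scan: a surface at optical path delay z produces a
# spectral fringe cos(2*k*z) over wavenumber k; an FFT over k localizes z as
# a peak. A fringe at frequency f (cycles per rad/m) corresponds to z = pi*f.
n = 1024
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 800e-9, n)  # wavenumber sweep
z_true = 1.2e-3                                             # assumed 1.2 mm depth
spectrum = 1.0 + np.cos(2 * k * z_true)                     # interferogram

ascan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dk = k[1] - k[0]
z_axis = np.fft.rfftfreq(n, d=dk) * np.pi                   # depth axis in meters
z_est = z_axis[np.argmax(ascan)]
print(f"estimated depth: {z_est * 1e3:.2f} mm")             # peak near 1.20 mm
```

    Running 300 such transforms in parallel (one per lenslet beam) is what makes all surface points effectively simultaneous, and hence robust to eye movement.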

  2. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  3. Air-touch interaction system for integral imaging 3D display

    NASA Astrophysics Data System (ADS)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for a tabletop-type integral imaging 3D display. The system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. We used multi-layer B-spline surface approximation to detect fingertips and gestures at heights of less than 10 cm above the screen from the input hand image. The proposed system provides an effective human-computer interaction method for the tabletop-type 3D display.

  4. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and quality of reconstruction have been studied by varying the influencing parameters, such as the concentration of dichromate and electron donor and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region, without any post-thermal or chemical processing.

  5. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm. The system has 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
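    Measuring MTF from a point spread function, as done above, amounts to taking the normalized magnitude of the PSF's Fourier transform and reading off where it crosses 10%. The Gaussian PSF and its width below are illustrative assumptions; the abstract's 0.45 mm figure is a measured property, not reproduced here.

```python
import numpy as np

# MTF from a 1-D point spread function: MTF(f) = |FFT(PSF)(f)| / |FFT(PSF)(0)|.
# A Gaussian PSF stands in for the measured O-arm PSF.
dx = 0.05                     # sampling pitch, mm
x = np.arange(-10, 10, dx)    # 20 mm window
sigma = 0.3                   # assumed PSF width, mm
psf = np.exp(-x**2 / (2 * sigma**2))

mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]                                 # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(len(x), d=dx)         # spatial frequency, cycles/mm
f10 = freqs[np.argmax(mtf < 0.1)]             # first frequency with MTF below 10%
print(f"10% MTF at {f10:.2f} cycles/mm")
```

    Reporting the 10% point of this curve (here in cycles/mm; the abstract quotes the equivalent spatial period in mm) is a common single-number summary of limiting resolution.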

  6. IMPROMPTU: a system for automatic 3D medical image-analysis.

    PubMed

    Sundaramoorthy, G; Hoford, J D; Hoffman, E A; Higgins, W E

    1995-01-01

    The utility of three-dimensional (3D) medical imaging is hampered by difficulties in extracting anatomical regions and making measurements in 3D images. Presently, a user is generally forced to use time-consuming, subjective, manual methods, such as slice tracing and region painting, to define regions of interest. Automatic image-analysis methods can ameliorate the difficulties of manual methods. This paper describes a graphical user interface (GUI) system for constructing automatic image-analysis processes for 3D medical-imaging applications. The system, referred to as IMPROMPTU, provides a user-friendly environment for prototyping, testing and executing complex image-analysis processes. IMPROMPTU can stand alone or it can interact with an existing graphics-based 3D medical image-analysis package (VIDA), giving a strong environment for 3D image-analysis, consisting of tools for visualization, manual interaction, and automatic processing. IMPROMPTU links to a large library of 1D, 2D, and 3D image-processing functions, referred to as VIPLIB, but a user can easily link in custom-made functions. 3D applications of the system are given for left-ventricular chamber, myocardial, and upper-airway extractions.

  7. An efficient calibration method for freehand 3-D ultrasound imaging systems.

    PubMed

    Leotta, Daniel F

    2004-07-01

    A phantom has been developed to quickly calibrate a freehand 3-D ultrasound (US) imaging system. Calibration defines the spatial relationship between the US image plane and an external tracking device attached to the scanhead. The phantom consists of a planar array of strings and beads, and a set of out-of-plane strings that guide the user to proper scanhead orientation for imaging. When an US image plane is coincident with the plane defined by the strings, the calibration parameters are calculated by matching of homologous points in the image and phantom. The resulting precision and accuracy of the 3-D imaging system are similar to those achieved with a more complex calibration procedure. The 3-D reconstruction performance of the calibrated system is demonstrated with a magnetic tracking system, but the method could be applied to other tracking devices.
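
    The "matching of homologous points" step can be sketched as a least-squares rigid fit; restricted to 2D (image-plane bead positions versus their known phantom-plane coordinates) it has a closed form. This is a generic 2D Procrustes sketch under that assumption, not the author's calibration code; all point values are synthetic.

```python
import math

def rigid_fit_2d(img_pts, phantom_pts):
    """Least-squares rigid transform (rotation theta + translation t)
    mapping image points onto homologous phantom points (2D Procrustes)."""
    n = len(img_pts)
    cx = sum(p[0] for p in img_pts) / n
    cy = sum(p[1] for p in img_pts) / n
    dx = sum(q[0] for q in phantom_pts) / n
    dy = sum(q[1] for q in phantom_pts) / n
    s_cos = s_sin = 0.0
    for (x, y), (u, v) in zip(img_pts, phantom_pts):
        x, y, u, v = x - cx, y - cy, u - dx, v - dy
        s_cos += x * u + y * v   # sum of dot products (cos term)
        s_sin += x * v - y * u   # sum of cross products (sin term)
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (dx - (c * cx - s * cy), dy - (s * cx + c * cy))

# Synthetic check: rotate known points by 30 degrees, translate by (5, -2)
true_theta = math.radians(30)
c, s = math.cos(true_theta), math.sin(true_theta)
img = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
phan = [(c * x - s * y + 5.0, s * x + c * y - 2.0) for x, y in img]
theta, (tx, ty) = rigid_fit_2d(img, phan)
```

With noise-free correspondences the fit recovers the transform exactly; with real bead localizations it minimizes the sum of squared residuals.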

  8. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. It is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms, and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m^3, and <350 W power. The system is modeled using LadarSIM, a MATLAB® and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We present the concept design and modeled performance predictions.

  9. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective: To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods: A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results: The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor; joystick-enabled fly-through allowed visualization of the spiculations extending from the breast cancer. Conclusion: The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be researched further to determine its utility in clinical practice. PMID:27774517

  10. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared with retrieval using global image features, features extracted from regions of interest (ROIs) that reflect the distribution patterns of abnormalities are more beneficial for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve retrieval accuracy, we propose a retrieval method using features extracted from 3D ROIs, including geometric features such as shape index (SI) and curvedness (CV) and texture features derived from the 3D gray-level co-occurrence matrix, built on our previous 2D medical image retrieval system. The system was evaluated on 20 volumetric CT datasets for colon polyp detection. Preliminary experiments indicated that integrating morphological features with texture features greatly improves retrieval performance. The retrieval results using features extracted from 3D ROIs accorded better with diagnoses from optical colonoscopy than those based on features from 2D ROIs. On the test database, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
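
    The two geometric features named above have compact definitions in terms of the principal curvatures at a surface point. A sketch under one common sign convention (conventions and normalization ranges vary across papers, so this should not be read as the authors' exact formulation):

```python
import math

def shape_index(k1, k2):
    """Koenderink-style shape index in [-1, 1]; k1 >= k2 are the
    principal curvatures at a surface point. +1 = cap, +0.5 = ridge,
    0 = saddle, -1 = cup under this convention."""
    if k1 == k2:
        # umbilic point: spherical cap/cup, or flat (conventionally 0)
        return 0.0 if k1 == 0 else math.copysign(1.0, k1)
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Curvedness: overall magnitude of surface bending; for a sphere
    of radius r both curvatures are 1/r and curvedness is 1/r."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)
```

Polyp-like protrusions tend toward cap-like shape-index values with moderate curvedness, which is what makes these features useful for colon polyp retrieval.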

  11. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three-dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging, as current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and yields a system that is solved simultaneously for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes, allowing consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-band radar data.

  12. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  13. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving in a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative algorithms such as the algebraic reconstruction technique (ART) were developed later, and compressed sensing based methods have recently been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D DBT imaging system using the C++ programming language. The simulator can apply different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including ART and total variation regularized reconstruction (ART+TV), are presented. The reconstruction results are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
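
    The ART iteration named above can be sketched as the classic Kaczmarz row-action update: each projection measurement defines a hyperplane ⟨a_i, x⟩ = b_i, and the image estimate is projected onto each hyperplane in turn. A toy sketch (the simulator itself is in C++; this tiny system and its values are illustrative):

```python
def art_sweep(x, rows, b, relax=1.0):
    """One ART (Kaczmarz) sweep over the system A x = b: project the
    current image estimate x onto each measurement hyperplane in turn."""
    for a, bi in zip(rows, b):
        norm2 = sum(v * v for v in a)
        if norm2 == 0.0:
            continue
        resid = (bi - sum(v * xi for v, xi in zip(a, x))) / norm2
        for j, v in enumerate(a):
            x[j] += relax * resid * v
    return x

# Toy 3-ray, 2-pixel "tomography" system, consistent by construction
rows = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = art_sweep([0.0, 0.0], rows, b)
```

For a consistent system the iterates converge to a solution; in limited-angle tomosynthesis the system is under-determined, which is why regularizers such as the TV term in ART+TV are added between sweeps.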

  14. Development of a 3-D Measuring System for Upper Limb Movements Using Image Processing

    NASA Astrophysics Data System (ADS)

    Ogata, Kohichi; Toume, Tadashi; Nakanishi, Ryoji

    This paper describes a 3-D motion capture system for the quantitative evaluation of a finger-nose test using image processing. In the field of clinical medicine, qualitative and quantitative evaluation of voluntary movements is necessary for correct diagnosis of disorders. For this purpose, we have developed a 3-D measuring system with a multi-camera setup. The configuration of the system is described, and examples of movement data are shown for normal subjects and patients. In the finger-nose test at a fast trial speed, a discriminant analysis using Mahalanobis generalized distances shows a discrimination rate of 93% between normal subjects and spinocerebellar degeneration (SCD) patients.
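
    A minimum-Mahalanobis-distance classifier of the kind used above can be sketched for 2-D feature vectors, where the covariance inverse has a closed form. The feature model and class parameters below are toy assumptions, not the paper's data:

```python
def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance for a 2-D feature vector, using the
    closed-form inverse of the 2x2 class covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    # inv(cov) = [[d, -b], [-c, a]] / det
    return (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det

def classify(x, classes):
    """Assign x to the class whose (mean, cov) gives the smallest
    Mahalanobis distance."""
    return min(classes, key=lambda name: mahalanobis_sq(x, *classes[name]))

# Hypothetical 2-D movement-feature model for the two groups
classes = {
    "normal": ((0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))),
    "SCD":    ((4.0, 3.0), ((1.0, 0.0), (0.0, 1.0))),
}
```

With identity covariances the Mahalanobis distance reduces to the Euclidean distance, so the decision boundary is the perpendicular bisector between the class means.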

  15. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    SciTech Connect

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.

  16. Development of a 3D Digital Particle Image Thermometry and Velocimetry (3DDPITV) System

    NASA Astrophysics Data System (ADS)

    Schmitt, David; Rixon, Greg; Dabiri, Dana

    2006-11-01

    A novel 3D Digital Particle Image Thermometry and Velocimetry (3DDPITV) system has been designed and fabricated. By combining 3D Digital Particle Image Velocimetry (3DDPIV) and Digital Particle Image Thermometry (DPIT) into one system, this technique provides simultaneous temperature and velocity data in a volume of ~1x1x0.5 in^3 using temperature sensitive liquid crystal particles as flow sensors. Two high-intensity xenon flashlamps were used as illumination sources. The imaging system consists of six CCD cameras, three allocated for measuring velocity, based on particle motion, and three for measuring temperature, based on particle color. The cameras were optically aligned using a precision grid and high-resolution translation stages. Temperature calibration was then performed using a precision thermometer and a temperature-controlled bath. Results from proof-of-concept experiments will be presented and discussed.

  17. Structured light 3D tracking system for measuring motions in PET brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Jørgensen, Morten R.; Paulsen, Rasmus R.; Højgaard, Liselotte; Roed, Bjarne; Larsen, Rasmus

    2010-02-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light with a DLP projector and a CCD camera is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure where the projector is treated as a camera. Additionally, the surface reconstructions are corrected for the non-linear projector output prior to image capture. The results are convincing and a first step toward a fully automated tracking system for measuring head motions in PET imaging.
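
    The phase-shifting interferometry step named above has a well-known closed form when four frames are captured at 90° phase shifts; the wrapped phase at each pixel follows from two intensity differences. A sketch assuming the common four-step scheme (the paper's exact shift sequence may differ):

```python
import math

def psi_phase4(i1, i2, i3, i4):
    """Wrapped phase from four intensity frames with phase shifts of
    0, 90, 180, 270 degrees: I_k = A + B*cos(phi + k*pi/2).
    Returns phi in (-pi, pi]."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic pixel: background A = 2, modulation B = 1, true phase 0.7 rad
A, B, phi = 2.0, 1.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = psi_phase4(*frames)
```

Because both numerator and denominator scale with the modulation B, the estimate is insensitive to background intensity and fringe contrast, which is what makes PSI robust for surface reconstruction.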

  18. Structured light imaging system for structural and optical characterization of 3D tissue-simulating phantoms

    NASA Astrophysics Data System (ADS)

    Liu, Songde; Smith, Zach; Xu, Ronald X.

    2016-10-01

    There is a pressing need for a phantom standard to calibrate medical optical devices. However, 3D printing of tissue-simulating phantom standards is hindered by the lack of appropriate methods to accurately characterize and reproduce surface topography and optical properties. We have developed a structured light imaging system to characterize the surface topography and optical properties (absorption coefficient and reduced scattering coefficient) of 3D tissue-simulating phantoms. The system consists of a hyperspectral light source, a digital light projector (DLP), a CMOS camera, two polarizers, a rotational stage, a translation stage, a motion controller, and a personal computer. Tissue-simulating phantoms with different structural and optical properties were characterized by the proposed imaging system and validated against a standard integrating sphere system. The experimental results showed that the proposed system achieves pixel-level optical properties with a percentage error of less than 11% for the absorption coefficient and less than 7% for the reduced scattering coefficient for phantoms without surface curvature. Meanwhile, the 3D topographic profile of the phantom can be effectively reconstructed with a deviation error of less than 1%. Our study demonstrated that the proposed structured light imaging system has the potential to characterize the structural profile and optical properties of 3D tissue-simulating phantoms.

  19. HIPERCIR: a low-cost high-performance 3D radiology image analysis system

    NASA Astrophysics Data System (ADS)

    Blanquer, Ignacio; Hernandez, Vincente; Ramirez, Javier; Vidal, Antonio M.; Alcaniz-Raya, Mariano L.; Grau Colomer, Vincente; Monserrat, Carlos A.; Concepcion, Luis; Marti-Bonmati, Luis

    1999-07-01

    Clinics currently have to deal with hundreds of 3D images a day, and processing and visualizing them with affordable systems is costly and slow. The present work describes a software-integrated parallel computing package developed at the Universidad Politecnica de Valencia (UPV) under the European project HIPERCIR, which aims to reduce the time and requirements for processing and visualizing 3D images with low-cost solutions, such as networks of PCs running standard operating systems. HIPERCIR is targeted at hospital radiology departments and radiology system providers as a tool for easing day-to-day diagnosis. The project is being developed by a consortium of medical image processing and parallel computing experts from the Computing Systems Department of the UPV, experts in biomedical software, and radiology and tomography clinical experts.

  20. Multispectral photon counting integral imaging system for color visualization of photon limited 3D scenes

    NASA Astrophysics Data System (ADS)

    Moon, Inkyu

    2014-06-01

    This paper provides an overview of a color photon-counting integral imaging system using Bayer elemental images for 3D visualization of photon-limited scenes. A color image sensor with a Bayer color filter array, i.e., a red, green, or blue filter in a repeating pattern, captures an elemental image set of a photon-limited three-dimensional (3D) scene. It is assumed that the observed photon count in each channel (red, green, or blue) follows Poisson statistics. The 3D scene is reconstructed in the Bayer format by applying a computational geometrical ray back-propagation algorithm and a parametric maximum likelihood estimator to the photon-limited Bayer elemental images. Finally, several standard demosaicing algorithms are applied to convert the Bayer-format 3D reconstruction into an RGB-per-pixel format. Experimental results demonstrate that the gradient-corrected linear interpolation technique achieves acceptable PSNR with lower computational complexity.
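
    For i.i.d. Poisson-distributed counts, the parametric maximum likelihood estimate of the rate is simply the sample mean (here, the counts mapped to the same reconstruction cell from different elemental images). A sketch verifying that the mean maximizes the Poisson log-likelihood; the count values are toy numbers, not the paper's data:

```python
import math

def poisson_loglik(lam, counts):
    """Poisson log-likelihood of a candidate rate lam for observed counts."""
    if lam <= 0:
        return float("-inf")
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def poisson_mle(counts):
    """ML rate estimate for i.i.d. Poisson data: the sample mean."""
    return sum(counts) / len(counts)

counts = [2, 3, 5, 1, 4]        # toy per-cell photon counts
lam_hat = poisson_mle(counts)   # sample mean
```

The log-likelihood is strictly concave in lam, so the sample mean is the unique maximizer; this is the estimator applied per Bayer channel before demosaicing.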

  1. Analysis of a 3-D system function measured for magnetic particle imaging.

    PubMed

    Rahmer, Jürgen; Weizenecker, Jürgen; Gleich, Bernhard; Borgert, Jörn

    2012-06-01

    Magnetic particle imaging (MPI) is a new tomographic imaging approach that can quantitatively map magnetic nanoparticle distributions in vivo. It is capable of volumetric real-time imaging at particle concentrations low enough to enable clinical applications. For image reconstruction in 3-D MPI, a system function (SF) is used, which describes the relation between the acquired MPI signal and the spatial origin of the signal. The SF depends on the instrumental configuration, the applied field sequence, and the magnetic particle characteristics. Its properties reflect the quality of the spatial encoding process. This work presents a detailed analysis of a measured SF to give experimental evidence that 3-D MPI encodes information using a set of 3-D spatial patterns or basis functions that is stored in the SF. This resembles filling 3-D k-space in magnetic resonance imaging, but is faster since all information is gathered simultaneously over a broad acquisition bandwidth. A frequency domain analysis shows that the finest structures that can be encoded with the presented SF are as small as 0.6 mm. SF simulations are performed to demonstrate that larger particle cores extend the set of basis functions towards higher resolution and that the experimentally observed spatial patterns require the existence of particles with core sizes of about 30 nm in the calibration sample. A simple formula is presented that qualitatively describes the basis functions to be expected at a certain frequency.

  2. In Vivo Validation of a 3-D Ultrasound System for Imaging the Lateral Ventricles of Neonates.

    PubMed

    Kishimoto, Jessica; Fenster, Aaron; Lee, David S C; de Ribaupierre, Sandrine

    2016-04-01

    Intra-ventricular hemorrhage, with the resultant cerebral ventricle dilation, is a common cause of brain injury in preterm neonates. Clinically, monitoring is performed using 2-D ultrasound (US); however, its clinical utility in dilation is limited because it cannot provide accurate measurements of irregular volumes such as those of the ventricles, and this might delay treatment until the patient's condition deteriorates severely. We have developed a 3-D US system to image the lateral ventricles of neonates within the confines of incubators. We describe an in vivo ventricle volume validation study in two parts: (i) comparisons between ventricle volumes derived from 3-D US and magnetic resonance images obtained within 24 h; and (ii) the difference between 3-D US ventricle volumes before and after clinically necessary interventions (ventricle taps), which remove cerebral spinal fluid. Magnetic resonance imaging ventricle volumes were found to be 13% greater than 3-D US ventricle volumes; however, we observed high correlations (R(2) = 0.99) when comparing the two modalities. Differences in ventricle volume pre- and post-intervention compared with the reported volume of cerebrospinal fluid removed also were highly correlated (R(2) = 0.93); the slope was not found to be statistically significantly different from 1 (p < 0.05), and the y-intercept was not found to be statistically different from 0 (p < 0.05). Comparison between 3-D US images can detect the volume change after neonatal intra-ventricular hemorrhage. This could be used to determine which patients will have progressive ventricle dilation and allow for more timely surgical interventions. However, 3-D US ventricle volumes should not be directly compared with magnetic resonance imaging ventricle volumes.

  3. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay.

    PubMed

    Liao, Hongen; Ishihara, Hirotaka; Tran, Huy Hoang; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi

    2010-01-01

    This paper describes a precision-guided surgical navigation system for minimally invasive surgery. The system combines a laser guidance technique with a three-dimensional (3D) autostereoscopic image overlay technique. Images of surgical anatomic structures superimposed onto the patient are created by employing an animated imaging method called integral videography (IV), which can display geometrically accurate 3D autostereoscopic images and reproduce motion parallax without the need for special viewing or tracking devices. To improve the placement accuracy of surgical instruments, we integrated an image overlay system with a laser guidance system for alignment of the surgical instrument and better visualization of patient's internal structure. We fabricated a laser guidance device and mounted it on an IV image overlay device. Experimental evaluations showed that the system could guide a linear surgical instrument toward a target with an average error of 2.48 mm and standard deviation of 1.76 mm. Further improvement to the design of the laser guidance device and the patient-image registration procedure of the IV image overlay will make this system practical; its use would increase surgical accuracy and reduce invasiveness.

  4. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: The inability to visualize dose intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential for improving the accuracy and efficiency of implantation. Hence, a 3D Image Guided Brachytherapy Planning System that conducts dose planning and intra-operative navigation based on 3D multi-organ reconstruction was developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with a Monte Carlo method, and the corresponding isodose lines and isodose surfaces are displayed in a stereo view. Real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion of MRI and ultrasound images. Applying the least-squares method, coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system was validated on eight patients with prostate cancer, and the navigation passed precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared with MC, the presented multi-organ reconstruction method is superior in preserving the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissue. During navigation, surgeons can observe instrument coordinates in real time using the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and

  5. 3D printed biomimetic vascular phantoms for assessment of hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Jianting; Ghassemi, Pejhman; Melchiorri, Anthony; Ramella-Roman, Jessica; Mathews, Scott A.; Coburn, James; Sorg, Brian; Chen, Yu; Pfefer, Joshua

    2015-03-01

    The emerging technique of three-dimensional (3D) printing provides a revolutionary way to fabricate objects with biologically realistic geometries. Previously we have performed optical and morphological characterization of basic 3D printed tissue-simulating phantoms and found them suitable for use in evaluating biophotonic imaging systems. In this study we assess the potential for printing phantoms with irregular, image-defined vascular networks that can be used to provide clinically-relevant insights into device performance. A previously acquired fundus camera image of the human retina was segmented, embedded into a 3D matrix, edited to incorporate the tubular shape of vessels and converted into a digital format suitable for printing. A polymer with biologically realistic optical properties was identified by spectrophotometer measurements of several commercially available samples. Phantoms were printed with the retinal vascular network reproduced as ~1.0 mm diameter channels at a range of depths up to ~3 mm. The morphology of the printed vessels was verified by volumetric imaging with μ-CT. Channels were filled with hemoglobin solutions at controlled oxygenation levels, and the phantoms were imaged by a near-infrared hyperspectral reflectance imaging system. The effect of vessel depth on hemoglobin saturation estimates was studied. Additionally, a phantom incorporating the vascular network at two depths was printed and filled with hemoglobin solution at two different saturation levels. Overall, results indicated that 3D printed phantoms are useful for assessing biophotonic system performance and have the potential to form the basis of clinically-relevant standardized test methods for assessment of medical imaging modalities.
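
    Under a Beer-Lambert assumption, estimating hemoglobin saturation from reflectance at two (or more) wavelengths reduces to solving a small linear system in the HbO2 and Hb concentrations. A two-wavelength sketch with made-up extinction coefficients (real values are wavelength-dependent and tabulated, and hyperspectral systems fit many wavelengths in a least-squares sense):

```python
def so2_from_absorbance(a1, a2, e_hbo, e_hb):
    """Solve the 2x2 Beer-Lambert system
        a_i = e_hbo[i]*C_hbo + e_hb[i]*C_hb   (i = 1, 2)
    for the chromophore concentrations, then return oxygen saturation
    C_hbo / (C_hbo + C_hb)."""
    det = e_hbo[0] * e_hb[1] - e_hbo[1] * e_hb[0]
    c_hbo = (a1 * e_hb[1] - a2 * e_hb[0]) / det
    c_hb = (e_hbo[0] * a2 - e_hbo[1] * a1) / det
    return c_hbo / (c_hbo + c_hb)

# Illustrative extinction coefficients (arbitrary units, NOT tabulated values)
e_hbo, e_hb = (1.0, 3.0), (3.0, 1.0)
# Forward-model absorbances for a channel filled at 80% saturation
a1 = e_hbo[0] * 0.8 + e_hb[0] * 0.2
a2 = e_hbo[1] * 0.8 + e_hb[1] * 0.2
sat = so2_from_absorbance(a1, a2, e_hbo, e_hb)
```

In a printed phantom the measured absorbance also depends on channel depth and scattering, which is exactly the depth effect on saturation estimates that the study probes.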

  6. Intra-operative 3D imaging system for robot-assisted fracture manipulation.

    PubMed

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-01-01

    Reduction is a crucial step in the treatment of broken bones, and achieving precise anatomical alignment of the bone fragments is essential for good, fast healing. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with currently available technology: the 2D nature of the image intensifier does not give the surgeon enough information about fracture alignment and rotation, which is inherently a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system receives and segments CT scan data of the fracture, generates 3D models of the bone fragments, and displays them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in physical space and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).

  7. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss, or congenital malformation; however, the main cause is hydrocephalus, which is the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported removed CSF with the slope not significantly different than 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken 4 (±3.8) days of each other did not show significant difference (p=0.44) between 3D US and MRI through paired t-test.

  8. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed Tomography (CT) scanning is widely used in the diagnosis of TBI. Nowadays, large amounts of TBI CT data are stored in hospital radiology departments. These data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the one under study. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system which works on TBI CT images. In this web-based system, the user queries by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
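The Jaccard-Needham measure on bin-based binary feature vectors admits a compact implementation. A sketch of the common set-overlap form (the exact variant used by TBIdoc is not specified in the abstract, and the example vectors are hypothetical):

```python
def jaccard_similarity(a, b):
    """Jaccard similarity of two equal-length binary feature vectors:
    |intersection| / |union| of the set bits (defined as 1.0 if both empty)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

# Bin-based binary lesion descriptors for two hypothetical CT slices
slice_a = [1, 0, 1, 1, 0, 0, 1]
slice_b = [1, 1, 1, 0, 0, 0, 1]
print(jaccard_similarity(slice_a, slice_b))  # 0.6
```

A 3D case-level score could then pool such slice-level similarities over two series of slices.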

  9. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around times often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.
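The block-volume idea (decompose the volume into blocks, process the blocks independently, reassemble the result) can be sketched as follows, using a Python thread pool in place of the platform's task scheduler; all names and the toy kernel are illustrative, not the platform's API:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def iter_blocks(shape, block):
    """Yield slice tuples covering a 3D volume in blocks; edge blocks
    shrink to fit, mimicking size-adaptive decomposition."""
    for z in range(0, shape[0], block[0]):
        for y in range(0, shape[1], block[1]):
            for x in range(0, shape[2], block[2]):
                yield (slice(z, min(z + block[0], shape[0])),
                       slice(y, min(y + block[1], shape[1])),
                       slice(x, min(x + block[2], shape[2])))

def process_volume(vol, block=(32, 32, 32), fn=lambda b: b * 2.0):
    """Apply a voxel-wise kernel to each block in parallel and reassemble."""
    out = np.empty_like(vol, dtype=float)
    slices = list(iter_blocks(vol.shape, block))
    with ThreadPoolExecutor() as pool:
        for s, result in zip(slices, pool.map(lambda s: fn(vol[s]), slices)):
            out[s] = result
    return out

vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
assert np.allclose(process_volume(vol, block=(2, 2, 2)), vol * 2.0)
```

Kernels with neighborhood dependencies would additionally need halo (overlap) regions per block, which this sketch omits.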

  10. 3-D ultrasonic strain imaging based on a linear scanning system.

    PubMed

    Huang, Qinghua; Xie, Bo; Ye, Pengfei; Chen, Zhaohong

    2015-02-01

    This paper introduces a 3-D strain imaging method based on a freehand linear scanning mode. We designed a linear sliding track with a position sensor and a height-adjustable holder to constrain the movement of an ultrasound probe operated in a freehand manner. When the probe is moved along the sliding track, the corresponding position measurements are transmitted in real time via a Bluetooth-based wireless communication module. In a single examination, the probe is scanned in two sweeps, with the probe height adjusted by the holder between sweeps, to collect the pre- and post-compression radio-frequency echoes, respectively. To generate a 3-D strain image, a cubic volume whose voxels denote relative tissue strains is defined according to the range of the two sweeps. Several slices in the volume are determined with respect to the post-compression frames, and the pre-compression frames are re-sampled to precisely correspond to the post-compression frames. A strain estimation method based on minimizing a cost function with dynamic programming is then used to obtain a 2-D strain image for each pair of frames from the re-sampled pre-compression sweep and the post-compression sweep. A software system was developed for volume reconstruction, visualization, and measurement of the 3-D strain images. The experimental results show that high-quality 3-D strain images of phantom and human tissues can be generated by the proposed method, indicating that the proposed system can be applied in real clinical settings (e.g., musculoskeletal assessment).
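The abstract does not give the cost function, but a generic integer-displacement dynamic-programming estimator, with an assumed sum-of-squared-differences data term and an L1 smoothness penalty, illustrates the class of method (all parameters and the synthetic data are illustrative):

```python
import numpy as np

def dp_displacements(pre, post, win=8, max_disp=5, lam=0.1):
    """Estimate per-window integer displacements between a pre- and
    post-compression RF line by minimizing matching cost plus a
    smoothness penalty with dynamic programming (Viterbi)."""
    n_win = (len(pre) - win) // win
    disps = np.arange(-max_disp, max_disp + 1)
    # matching cost C[i, j]: SSD of window i against post shifted by disps[j]
    C = np.full((n_win, len(disps)), np.inf)
    for i in range(n_win):
        s = i * win
        for j, d in enumerate(disps):
            if 0 <= s + d and s + d + win <= len(post):
                C[i, j] = np.sum((pre[s:s + win] - post[s + d:s + d + win]) ** 2)
    # DP recursion: total[i, j] = C[i, j] + min_k(total[i-1, k] + lam*|d_j - d_k|)
    total = C.copy()
    back = np.zeros((n_win, len(disps)), dtype=int)
    for i in range(1, n_win):
        trans = total[i - 1][None, :] + lam * np.abs(disps[:, None] - disps[None, :])
        back[i] = np.argmin(trans, axis=1)
        total[i] += trans[np.arange(len(disps)), back[i]]
    # backtrack the optimal displacement path
    path = [int(np.argmin(total[-1]))]
    for i in range(n_win - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return disps[np.array(path[::-1])]

# Synthetic check: the post-compression line is the pre line shifted by 2 samples
rng = np.random.default_rng(1)
pre = rng.random(200)
post = np.roll(pre, 2)
print(dp_displacements(pre, post))  # every window recovers the 2-sample shift
```

Strain is then the axial gradient of the (subsample-refined) displacement field; real estimators add subsample interpolation and 2-D regularization.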

  11. A preliminary evaluation work on a 3D ultrasound imaging system for 2D array transducer

    NASA Astrophysics Data System (ADS)

    Zhong, Xiaoli; Li, Xu; Yang, Jiali; Li, Chunyu; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents a preliminary evaluation of a pre-designed 3-D ultrasound imaging system. The system mainly consists of four parts: a 7.5 MHz, 24×24 2-D array transducer, the transmit/receive circuit, the power supply, and the data acquisition and real-time imaging module. A row-column addressing scheme is adopted for the transducer fabrication, which greatly reduces the number of active channels. The element area of the transducer is 4.6 mm by 4.6 mm. Four kinds of tests were carried out to evaluate the imaging performance: penetration depth range, axial and lateral resolution, positioning accuracy, and 3-D imaging frame rate. Several strongly reflecting metal objects, fixed in a water tank, were selected as imaging targets owing to the low signal-to-noise ratio of the transducer. The distance between the transducer and the tested objects, the thickness of the aluminum, and the seam width of the aluminum sheet were measured with a calibrated micrometer to evaluate the penetration depth and the axial and lateral resolution, respectively. The experimental results showed that the imaging penetration depth ranged from 1.0 cm to 6.2 cm, the axial and lateral resolution were 0.32 mm and 1.37 mm respectively, the imaging speed was up to 27 frames per second, and the positioning accuracy was 9.2%.

  12. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computation of the integral-pixel search. Experiments were carried out and the results indicated that the new method improved computational efficiency by a factor of about 4-10 compared with the traditional DIC method. The system is aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, indicating great potential for tracking muscle and skin movements.
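The integral-pixel search stage of DIC can be illustrated with a brute-force zero-mean normalized cross-correlation (ZNCC) search; the paper's specific simplification is not detailed in the abstract, so this is only a baseline sketch with illustrative names and data:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def integer_pixel_search(template, search, step=1):
    """Return the (row, col) offset in `search` maximizing ZNCC with `template`.
    A coarser `step` mimics simplified searches used to cut computation."""
    th, tw = template.shape
    best, best_score = (0, 0), -2.0
    for r in range(0, search.shape[0] - th + 1, step):
        for c in range(0, search.shape[1] - tw + 1, step):
            score = zncc(template, search[r:r + th, c:c + tw])
            if score > best_score:
                best, best_score = (r, c), score
    return best

rng = np.random.default_rng(0)
ref = rng.random((40, 40))
template = ref[10:20, 12:22]
print(integer_pixel_search(template, ref))  # recovers the offset (10, 12)
```

Subpixel refinement (interpolation or iterative optimization around the integral-pixel optimum) would follow this step in a full DIC pipeline.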

  13. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

    An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. The laser uses a tiny, adjustable microelectromechanical system (MEMS) mirror, made at SiWave, to sweep the laser frequency. The laser device is small (70×50×13 mm). The LADAR uses mature fiber-optic telecommunication technologies throughout the system, making this innovation an efficient performer. The small size and light weight make the system useful for commercial and industrial applications, including surface damage inspection, range measurement, and 3D imaging.
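The 1 mm resolution figure is consistent with the usual frequency-swept range-resolution relation dR = c / (2B), which implies an optical sweep of roughly 150 GHz. A back-of-the-envelope check, not a description of the actual device:

```python
C = 299_792_458.0  # speed of light (m/s)

def range_resolution(bandwidth_hz):
    """Range resolution of a frequency-swept (FMCW) lidar: dR = c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def required_bandwidth(resolution_m):
    """Optical frequency sweep needed for a given range resolution."""
    return C / (2.0 * resolution_m)

# A 1 mm range resolution implies a sweep of roughly 150 GHz
print(f"{required_bandwidth(1e-3) / 1e9:.0f} GHz")
```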

  14. Imaging the behavior of molecules in biological systems: breaking the 3D speed barrier with 3D multi-resolution microscopy.

    PubMed

    Welsher, Kevin; Yang, Haw

    2015-01-01

    The overwhelming effort in the development of new microscopy methods has been focused on increasing the spatial and temporal resolution in all three dimensions to enable the measurement of the molecular-scale phenomena at the heart of biological processes. However, existing 3D imaging methods face a significant speed barrier associated with the overhead required to image large volumes. This overhead can be overcome, providing nearly unlimited temporal precision, by simply focusing on a single molecule or particle via real-time 3D single-particle tracking and the newly developed 3D Multi-resolution Microscopy (3D-MM). Here, we investigate the optical and mechanical limits of real-time 3D single-particle tracking in the context of other methods. In particular, we investigate the use of an optical cantilever for position-sensitive detection, finding that this method yields system magnifications of over 3000×. We also investigate the ideal PID control parameters and their effect on the power spectrum of simulated trajectories. Taken together, these data suggest that the speed limit in real-time 3D single-particle tracking results from slow piezoelectric stage response rather than optical sensitivity or PID control.
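The conclusion that stage response, not control, sets the speed limit can be illustrated with a toy model: a PI-controlled first-order stage tracking a sinusoidal target. All gains, time constants, and frequencies below are assumed for illustration only:

```python
import math

def track_rms_error(tau_stage, kp=1.0, ki=50.0, freq=50.0, dt=1e-4, t_end=0.2):
    """Simulate PI control of a first-order stage tracking a sinusoidal target
    and return the RMS tracking error over the second half of the run."""
    pos, integ = 0.0, 0.0
    errs = []
    n = int(t_end / dt)
    for i in range(n):
        target = math.sin(2 * math.pi * freq * i * dt)
        err = target - pos
        integ += err * dt
        command = kp * err + ki * integ
        pos += (command - pos) * dt / tau_stage  # first-order stage lag
        if i >= n // 2:
            errs.append(err)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

fast_stage = track_rms_error(tau_stage=1e-3)  # responsive piezo stage
slow_stage = track_rms_error(tau_stage=1e-2)  # sluggish stage
assert slow_stage > fast_stage  # slower stage response -> worse tracking
```

Re-tuning `kp` and `ki` cannot fully recover the fast-stage error in this model, mirroring the abstract's point that the stage, not the controller, is the bottleneck.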

  15. A small animal image guided irradiation system study using 3D dosimeters

    NASA Astrophysics Data System (ADS)

    Qian, Xin; Admovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high-resolution image-guided small animal irradiation platform, cone beam computed tomography (CBCT) is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both the imaging and irradiation components. Conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour-intensive film preparation and scanning. In addition, due to the novel design of this platform, the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are driven by different mechanical systems, a discrepancy between the rotation isocenters exists. To deliver x-rays precisely, it is essential to verify the coincidence of the imaging and irradiation isocenters. A 3D PRESAGE dosimeter provides an excellent tool for checking dosimetry and verifying the coincidence of the irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). The isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with star shots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  16. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time.

  17. Noise analysis for near field 3-D FM-CW radar imaging systems

    SciTech Connect

    Sheen, David M.

    2015-06-19

    Near-field radar imaging systems are used for several applications, including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit performance in several ways, including reduced system sensitivity and reduced image dynamic range. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near-field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency-domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of these noise sources on a fast-chirping FM-CW system.
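In an FM-CW radar, target range maps linearly to the beat frequency via the chirp slope: f_b = 2RS/c with S = B/T. A quick numeric sketch (the parameter values are illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light (m/s)

def beat_frequency(range_m, bandwidth_hz, chirp_time_s):
    """FM-CW beat frequency for a target at `range_m`:
    f_b = 2 * R * S / c, with chirp slope S = B / T."""
    slope = bandwidth_hz / chirp_time_s
    return 2.0 * range_m * slope / C

def range_from_beat(f_beat_hz, bandwidth_hz, chirp_time_s):
    """Invert the beat frequency back to range."""
    slope = bandwidth_hz / chirp_time_s
    return f_beat_hz * C / (2.0 * slope)

# e.g. a 10 GHz sweep in 1 ms: a near-field target at 1 m gives a ~67 kHz beat
fb = beat_frequency(1.0, 10e9, 1e-3)
print(f"{fb / 1e3:.1f} kHz")
```

Phase noise on the chirp smears energy around each beat tone, which is one of the mechanisms limiting image dynamic range analyzed in the paper.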

  18. Detailed analysis of an optimized FPP-based 3D imaging system

    NASA Astrophysics Data System (ADS)

    Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges

    2016-05-01

    In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency, multi-phase-shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, compensation of the phase error caused by the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques, and the tradeoff between processing speed and accuracy is discussed in detail. Fourth, generalized camera and system calibration is developed for the phase-to-real-world coordinate transformation. The calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights, employing a nonlinear least-squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D photograph to the 3D height map. The last step is 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full field-of-view reconstruction. The system is experimentally constructed using compact, portable, and low-cost off-the-shelf components. A MATLAB® based GUI was developed to control and synchronize the whole system.
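The phase-shifting step typically reduces to the standard N-step wrapped-phase formula. A sketch under the common fringe model I_n = A + B·cos(phi + 2πn/N); sign conventions vary between implementations, and the synthetic test data are illustrative:

```python
import numpy as np

def wrapped_phase(frames):
    """Wrapped phase from N phase-shifted fringe images
    I_n = A + B*cos(phi + 2*pi*n/N). `frames` has shape (N, H, W);
    returns the phase map in (-pi, pi]."""
    n = len(frames)
    deltas = 2.0 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(deltas), frames, axes=1)  # -(N*B/2) * sin(phi)
    c = np.tensordot(np.cos(deltas), frames, axes=1)  #  (N*B/2) * cos(phi)
    return np.arctan2(-s, c)

# Synthetic 4-step check: a linear phase ramp across one image row
phi = np.linspace(-3.0, 3.0, 64).reshape(1, 64)
frames = np.stack([128 + 100 * np.cos(phi + 2 * np.pi * k / 4) for k in range(4)])
assert np.allclose(wrapped_phase(frames), phi, atol=1e-9)
```

Multi-frequency acquisition then resolves the 2π ambiguities of this wrapped map during temporal unwrapping.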

  19. Novel metrics and methodology for the characterisation of 3D imaging systems

    NASA Astrophysics Data System (ADS)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Lohse, Niels; Jackson, Michael R.

    2017-04-01

    The modelling, benchmarking and selection process for non-contact 3D imaging systems relies on the ability to characterise their performance. Characterisation methods that require optically compliant artefacts, such as matt white spheres or planes, fail to reveal the performance limitations of a 3D sensor as encountered when measuring a real-world object with a problematic surface finish. This paper reports a method for evaluating the performance of 3D imaging systems on surfaces of arbitrary isotropic surface finish, position and orientation. The method involves capturing point clouds from a set of samples in a range of surface orientations and distances from the sensor. The point clouds are processed to create a single performance chart per surface finish, which shows both whether a point is likely to be recovered and the expected point noise as a function of surface orientation and distance from the sensor. In this paper, the method is demonstrated using a low-cost pan-tilt table and an active stereo 3D camera, whose performance is characterised by the fraction and quality of recovered data points on isotropic aluminium surfaces ranging in roughness average (Ra) from 0.09 to 0.46 μm, at angles of up to 55° relative to the sensor and over distances from 400 to 800 mm to the scanner. Results from a matt white surface, similar to those used in previous characterisation methods, contrast drastically with results from even the dullest aluminium sample tested, demonstrating the need to characterise sensors by their limitations, not just their best-case performance.

  20. Complete calibration of a phase-based 3D imaging system based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Meng, Shasha; Ma, Haiyan; Zhang, Zonghua; Guo, Tong; Zhang, Sixiang; Hu, Xiaotang

    2011-11-01

    Phase calculation-based 3D imaging systems have been widely studied because of their advantages of non-contact operation, full-field and fast acquisition, and automatic data processing. A vital step is calibration, which establishes the relationship between the phase map and the range image. Existing calibration methods are complicated because they use a precise translating stage or a 3D gauge block. Recently, we presented a simple method to convert phase into depth data by using a polynomial function and a plate with discrete surface markers at known spacing. However, the initial positions of all the markers need to be determined manually, and the X, Y coordinates are not calibrated. This paper presents a complete calibration method for phase calculation-based 3D imaging systems using a plate with discrete surface markers at known spacing. The absolute phase of each pixel can be calculated by projecting fringe patterns onto the plate. Each marker position can be determined by an automatic extraction algorithm, so the depth of each pixel relative to a chosen reference plane can be obtained. Therefore, the coefficients of the polynomial function for each pixel are determined using the obtained absolute phase and depth data. Meanwhile, the pixel positions and the X, Y coordinates can be established from the parameters of the CCD camera. Experimental results and performance evaluation show that the proposed calibration method can easily establish the relationship between the absolute phase map and the range image in a simple way.
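The per-pixel polynomial mapping from absolute phase to depth can be sketched with numpy; the polynomial order, data layout, and function names below are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np

def fit_phase_to_depth(phases, depths, order=3):
    """Fit a per-pixel polynomial depth = f(phase) from calibration data.
    `phases`, `depths`: shape (K, H, W) for K calibration plate positions.
    Returns coefficients of shape (order+1, H, W), highest degree first."""
    k, h, w = phases.shape
    coeffs = np.empty((order + 1, h, w))
    for i in range(h):
        for j in range(w):
            coeffs[:, i, j] = np.polyfit(phases[:, i, j], depths[:, i, j], order)
    return coeffs

def phase_to_depth(coeffs, phase):
    """Evaluate the fitted per-pixel polynomial at a measured phase map."""
    depth = np.zeros_like(phase)
    for c in coeffs:                      # Horner's scheme, per pixel
        depth = depth * phase + c
    return depth

# Synthetic calibration: depth varies linearly with phase at every pixel
phases = np.linspace(0, 10, 6)[:, None, None] * np.ones((6, 2, 2))
depths = 2.0 * phases + 1.0
coeffs = fit_phase_to_depth(phases, depths, order=1)
assert np.allclose(phase_to_depth(coeffs, phases[3]), depths[3])
```

In the proposed method the calibration depths come from the known marker spacing on the plate rather than from a translating stage.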

  1. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to the additional detection of small, early-stage breast cancers which are occult in the corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a GentleBoost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method accurately located the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.

  2. Evaluation of precision and accuracy assessment of different 3-D surface imaging systems for biomedical purposes.

    PubMed

    Eder, Maximilian; Brockmann, Gernot; Zimmermann, Alexander; Papadopoulos, Moschos A; Schwenzer-Zimmerer, Katja; Zeilhofer, Hans Florian; Sader, Robert; Papadopulos, Nikolaos A; Kovacs, Laszlo

    2013-04-01

    Three-dimensional (3-D) surface imaging has gained clinical acceptance, especially in the fields of cranio-maxillo-facial and plastic, reconstructive, and aesthetic surgery. Six scanners based on different scanning principles (Minolta Vivid 910®, Polhemus FastSCAN™, GFM PRIMOS®, GFM TopoCAM®, Steinbichler Comet® Vario Zoom 250, 3dMD DSP 400®) were used to measure five sheep skulls of different sizes. In three areas of varying anatomical complexity (area 1 = high; 2 = moderate; 3 = low), 56 distances between 20 landmarks were defined on each skull. Manual measurement (MM), coordinate measuring machine (CMM) measurement and computed tomography (CT) measurement were used to define a reference method for the subsequent precision and accuracy evaluation of the different 3-D scanning systems. MM showed high correlation with CMM and CT measurements (both r = 0.987; p < 0.001) and served as the reference method. The TopoCAM®, Comet® and Vivid 910® showed the highest measurement precision over all areas of complexity; the Vivid 910®, Comet® and DSP 400® demonstrated the highest accuracy over all areas, with the Vivid 910® being most accurate in areas 1 and 3 and the DSP 400® most accurate in area 2. In accordance with the measured distance length, most 3-D devices present higher measurement precision and accuracy for large distances and lower precision and accuracy for short distances. In general, higher degrees of complexity are associated with lower 3-D assessment accuracy, suggesting that, for optimal results, different types of scanners should be applied to specific clinical applications and medical problems according to their particular construction designs and characteristics.

  3. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR

    SciTech Connect

    Yun, G. S.; Choi, M. J.; Lee, J.; Kim, M.; Leem, J.; Nam, Y.; Choe, G. H.; Lee, W.; Park, H. K.; Park, H.; Woo, D. S.; Kim, K. W.; Domier, C. W.; Luhmann, N. C.; Ito, N.; Mase, A.; Lee, S. G.

    2014-11-15

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated by 1/16th of the torus from the first ECEI system. For the first time, the dynamical evolutions of MHD instabilities from the plasma core to the edge have been visualized in quasi-3D for a wide range of the KSTAR operation (B0 = 1.7∼3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components including the development of broad-band polarization rotators for imaging of the fundamental ordinary ECE as well as the usual 2nd harmonic extraordinary ECE.

  4. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR.

    PubMed

    Yun, G S; Lee, W; Choi, M J; Lee, J; Kim, M; Leem, J; Nam, Y; Choe, G H; Park, H K; Park, H; Woo, D S; Kim, K W; Domier, C W; Luhmann, N C; Ito, N; Mase, A; Lee, S G

    2014-11-01

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated by 1/16th of the torus from the first ECEI system. For the first time, the dynamical evolutions of MHD instabilities from the plasma core to the edge have been visualized in quasi-3D for a wide range of the KSTAR operation (B0 = 1.7∼3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components including the development of broad-band polarization rotators for imaging of the fundamental ordinary ECE as well as the usual 2nd harmonic extraordinary ECE.

  5. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  6. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  7. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  8. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  9. Development of a 3-D data acquisition system for human facial imaging

    NASA Astrophysics Data System (ADS)

    Marshall, Stephen J.; Rixon, R. C.; Whiteford, Don N.; Wells, Peter J.; Powell, S. J.

    1990-07-01

    While preparing to conduct human facial surgery, it is necessary to visualise the effects of the proposed surgery on the patient's appearance. This visualisation is of great benefit to both surgeon and patient, and has traditionally been achieved by the manual manipulation of photographs. Technological developments in computer-aided design and optical sensing now make it possible to construct a computer-based imaging system which can simulate the effects of facial surgery on patients. A collaborative project with the aim of constructing a prototype facial imaging system is under way between the National Engineering Laboratory and St George's Hospital. The proposed system will acquire, display and manipulate 3-dimensional facial images of patients requiring facial surgery. The feasibility of using two NEL-developed optical measurement methods for 3-D facial data acquisition had been established by their successful application to the measurement of dummy heads. The two optical measurement systems, the NEL Auto-MATE moire fringe contouring system and the NEL STRIPE laser scanning triangulation system, were further developed to adapt them for facial imaging, and additional tests were carried out with emphasis on live human subjects. The knowledge gained from these tests enabled the selection of the more suitable of the two methods for facial data acquisition. A full description of the methods and equipment used in the study will be given. Additionally, work on the effects of the quality and quantity of measurement data on the facial image will be described. Finally, the question of how best to display and manipulate the facial images will be addressed.

  10. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, such as MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  11. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/sec.
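The abstract does not specify the filter's exact form. One common choice of 3D adaptive filter is a Lee-style local-statistics filter, which smooths strongly in flat regions and preserves features where local variance exceeds the assumed noise variance. A minimal pure-Python sketch on a tiny synthetic volume (all sizes and noise levels are our invented assumptions):

```python
import random

def lee_filter_3d(vol, noise_var, r=1):
    """Lee-style adaptive filter: blend each voxel with its local mean,
    weighted by the ratio of local excess variance to total local variance."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                # gather the (2r+1)^3 neighborhood, clipped at the borders
                nb = [vol[zz][yy][xx]
                      for zz in range(max(0, z - r), min(nz, z + r + 1))
                      for yy in range(max(0, y - r), min(ny, y + r + 1))
                      for xx in range(max(0, x - r), min(nx, x + r + 1))]
                m = sum(nb) / len(nb)
                var = sum((v - m) ** 2 for v in nb) / len(nb)
                # k -> 0 in flat regions (smooth), k -> 1 at strong edges (keep)
                k = max(0.0, var - noise_var) / var if var > 0 else 0.0
                out[z][y][x] = m + k * (vol[z][y][x] - m)
    return out

# noisy constant volume: the filter should pull voxels toward the local mean
random.seed(0)
vol = [[[100 + random.gauss(0, 5) for _ in range(8)]
        for _ in range(8)] for _ in range(8)]
sm = lee_filter_3d(vol, noise_var=25.0)
```

At the 10 MVoxels/sec rate quoted in the abstract, each voxel's neighborhood gathering and statistics must complete in well under a microsecond, which is why a naive implementation like this one is only a didactic sketch and the real systems rely on multicore DSP optimization.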

  12. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    NASA Technical Reports Server (NTRS)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single-objective-lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch between the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The mismatch shrinks as the number of passbands in a CMBF increases; however, the number of passbands is constrained by cost and fabrication technique. In this paper, a simulation predicting the color mismatch is reported.
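A toy simulation in the spirit of the paper's color-mismatch prediction: two complementary, staggered sets of passbands sample the same reflectance spectrum and therefore yield systematically different RGB values. The band edges, the reflectance spectrum, and the one-passband-per-channel binning below are all our invented assumptions, not the fabricated filters of the real system:

```python
# Toy CMBF color-mismatch illustration. All numbers are invented.

# complementary, staggered passbands (nm), one per RGB channel per viewpoint
CMBF_LEFT  = [(400, 450), (500, 550), (600, 650)]
CMBF_RIGHT = [(450, 500), (550, 600), (650, 700)]

def reflectance(wl):
    # smooth toy spectrum: more reflective toward the red end
    return 0.2 + 0.6 * (wl - 400) / 300

def rgb_through(bands):
    """Average the spectrum through each passband; one passband per channel."""
    rgb = []
    for lo, hi in bands:
        rgb.append(sum(reflectance(wl) for wl in range(lo, hi)) / (hi - lo))
    return rgb

left, right = rgb_through(CMBF_LEFT), rgb_through(CMBF_RIGHT)
mismatch = max(abs(a - b) for a, b in zip(left, right))
print(left, right, mismatch)  # the staggered bands give different RGB per eye
```

With this linear toy spectrum the per-channel mismatch is exactly the slope times the 50 nm stagger (0.1 here); narrower, more numerous passbands would shrink the stagger and hence the mismatch, which is the paper's point.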

  13. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  14. Singular value decomposition analysis of a photoacoustic imaging system and 3D imaging at 0.7 FPS.

    PubMed

    Roumeliotis, Michael B; Stodilka, Robert Z; Anastasio, Mark A; Ng, Eldon; Carson, Jeffrey J L

    2011-07-04

    Photoacoustic imaging is a non-ionizing imaging modality that provides contrast consistent with optical imaging techniques, while its resolution and penetration depth are similar to ultrasound techniques. In a previous publication [Opt. Express 18, 11406 (2010)], a technique was introduced to experimentally acquire the imaging operator for a photoacoustic imaging system. While this was an important foundation for future work, we have recently improved the experimental procedure, allowing a more densely populated imaging operator to be acquired. Subsets of the imaging operator were produced by varying the transducer count as well as the measurement-space temporal sampling rate. Examination of the matrix rank and of the effect of contributing object-space singular vectors on image reconstruction was performed. For a PAI system collecting only limited data projections, matrix rank increased linearly with transducer count and measurement-space temporal sampling rate. Image reconstruction using a regularized pseudoinverse of the imaging operator was performed on photoacoustic signals from a point source, a line source, and an array of point sources derived from the imaging operator. As expected, image quality increased for each object with increasing transducer count and measurement-space temporal sampling rate. Using the same approach, but on experimentally sampled photoacoustic signals from a moving point-like source, acquisition, data transfer, reconstruction and image display took 1.4 s using one laser pulse per 3D frame. With relatively simple hardware improvements to data transfer and computation speed, our current imaging results imply that acquisition and display of 3D photoacoustic images at laser repetition rates of 10 Hz is easily achieved.
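A minimal sketch of the reconstruction step described above: a Tikhonov-regularized pseudoinverse, x = (HᵀH + λI)⁻¹Hᵀy, applied to a tiny made-up imaging operator. The paper's operator is experimentally measured and far larger; the matrix, object, and regularization weight below are our assumptions purely to show the algebra:

```python
# Regularized-pseudoinverse reconstruction on a toy 4x3 imaging operator.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def reconstruct(H, y, lam=1e-4):
    """x = (H^T H + lam*I)^-1 H^T y  (Tikhonov-regularized least squares)."""
    Ht = transpose(H)
    A = matmul(Ht, H)
    for i in range(len(A)):
        A[i][i] += lam
    rhs = [row[0] for row in matmul(Ht, [[v] for v in y])]
    return solve(A, rhs)

# 4 "transducer samples" x 3 "voxels": forward-project a point-like object
H = [[1.0, 0.2, 0.0],
     [0.3, 1.0, 0.1],
     [0.0, 0.4, 1.0],
     [0.2, 0.1, 0.5]]
x_true = [0.0, 1.0, 0.0]  # point source in the middle voxel
y = [sum(h * x for h, x in zip(row, x_true)) for row in H]
x_rec = reconstruct(H, y)
print(x_rec)  # close to [0, 1, 0]
```

The regularization weight trades noise amplification against bias; the paper's point about transducer count and temporal sampling corresponds to adding rows to H, which raises its rank and improves the conditioning of HᵀH.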

  15. 3D display and image processing system for metal bellows welding

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Son, Jung-Young

    2010-04-01

    Industrial welded metal bellows take the shape of a flexible pipeline. The most common form of bellows is built from pairs of washer-shaped discs of thin sheet metal stamped from strip stock. Performing the arc welding operation may cause dangerous accidents and produce noxious fumes. Furthermore, during welding, workers must observe the object directly through a microscope while adjusting the vertical and horizontal positions of the welding rod tip and of the bellows fixed on the jig, respectively. Welding while looking through a microscope is tiring. To improve a working environment in which workers sit in an uncomfortable position, and to raise productivity, we introduced 3D display and image processing. The main purpose of the system is not only to maximize industrial productivity with accuracy, but also to meet safety standards through full automation of the work by remote control.

  16. 3D-printed eagle eye: Compound microlens system for foveated imaging

    PubMed Central

    Thiele, Simon; Arzenbacher, Kathrin; Gissibl, Timo; Giessen, Harald; Herkommer, Alois M.

    2017-01-01

    We present a highly miniaturized camera, mimicking the natural vision of predators, by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor. Our system combines four printed doublet lenses with different focal lengths (equivalent to f = 31 to 123 mm for a 35-mm film) in a 2 × 2 arrangement to achieve a full field of view of 70° with an angular resolution that increases toward the center of the image, up to 2 cycles/deg. The footprint of the optics on the chip is below 300 μm × 300 μm, whereas their height is <200 μm. Because the four lenses are printed in one single step without the necessity for any further assembling or alignment, this approach allows for fast design iterations and can lead to a plethora of different miniaturized multiaperture imaging systems with applications in fields such as endoscopy, optical metrology, optical sensing, surveillance drones, or security. PMID:28246646
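The quoted focal-length equivalents can be sanity-checked against the 70° field of view. Assuming the usual 36 × 24 mm full-frame reference for "35-mm film" (our assumption; the abstract only quotes equivalents), the diagonal field of view of the widest channel comes out near 70°:

```python
import math

# Diagonal FOV for a 35-mm-equivalent focal length, assuming a
# 36 x 24 mm reference frame (43.3 mm diagonal). Illustrative only.

def diagonal_fov_deg(f_mm, sensor_w=36.0, sensor_h=24.0):
    d = math.hypot(sensor_w, sensor_h)
    return math.degrees(2 * math.atan(d / (2 * f_mm)))

print(f"{diagonal_fov_deg(31):.1f} deg")   # ~69.8 deg, matching the quoted 70 deg
print(f"{diagonal_fov_deg(123):.1f} deg")  # ~20 deg: narrow, high-resolution center
```

The wide f = 31 mm channel covers the full 70° field while the f = 123 mm channel forms the narrow, high-angular-resolution fovea, which is exactly the foveated arrangement the title describes.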

  17. Core Formation in Planetesimals: Textural Analyses From 3D Synchrotron Imaging and Complex Systems Modeling

    NASA Astrophysics Data System (ADS)

    Rushmer, T. A.; Tordesillas, A.; Walker, D. M.; Parkinson, D. Y.; Clark, S. M.

    2012-12-01

    Recent scenarios of core formation in planetesimals, based on calculations from planetary dynamicists and on extinct radionuclides (e.g. 26Al, 60Fe), call for segregation of a metal liquid (core) from both solid silicate and a partially molten silicate matrix (a silicate mush). These scenarios require segregation of metallic melt along fracture networks, or growth of molten core material into blebs large enough to overcome the strength of the mush matrix. Such segregation usually involves high strain rates so that separation can occur, in agreement with the accretion model of planetary growth. Experimental work has suggested that deformation and shear can help develop fracture networks and coalesce metallic blebs. Here, we have developed an innovative approach that currently combines 2D textural analysis of deformation experiments on a partially molten natural meteorite with complex network analyses. 3D textural data from experimental samples, deformed at high strain rates with or without silicate melt present, have been obtained by synchrotron-based high-resolution hard X-ray microtomography imaging. A series of two-dimensional images is collected as the sample is rotated, and tomographic reconstruction yields the full 3D representation of the sample. Virtual slices through the 3D object in any arbitrary direction can be visualized, or the full data set can be visualized by volume rendering. More importantly, automated image filtering and segmentation allow the extraction of boundaries between the various phases. The volumes, shapes, and distributions of each phase, and the connectivity between them, can then be quantitatively analysed, and these results can be compared to models. We are currently using these new visual data sets to augment our 2D data, and the results will be included in our current complex-systems analytical approach. This integrated method can elucidate and quantify the growth of metallic blebs in regions where
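One way the phase connectivity mentioned above can be quantified is 3D connected-component labeling of the segmented volume, giving one voxel count per metal bleb. The tiny binary volume and the 6-connectivity choice below are our assumptions; real segmented microtomography data would replace them:

```python
# Count and size the connected metal regions ("blebs") in a segmented
# 3D volume using BFS flood fill with 6-connectivity.
from collections import deque

def label_blebs(vol):
    """Return voxel counts of connected metal regions, largest first."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    seen = set()
    sizes = []
    for z0 in range(nz):
        for y0 in range(ny):
            for x0 in range(nx):
                if vol[z0][y0][x0] and (z0, y0, x0) not in seen:
                    q = deque([(z0, y0, x0)])  # flood-fill one bleb
                    seen.add((z0, y0, x0))
                    n = 0
                    while q:
                        z, y, x = q.popleft()
                        n += 1
                        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                           (0,-1,0), (0,0,1), (0,0,-1)):
                            zz, yy, xx = z + dz, y + dy, x + dx
                            if (0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx
                                    and vol[zz][yy][xx]
                                    and (zz, yy, xx) not in seen):
                                seen.add((zz, yy, xx))
                                q.append((zz, yy, xx))
                    sizes.append(n)
    return sorted(sizes, reverse=True)

# two separated blebs in a 4x4x4 volume: a 2x2x2 block and a single voxel
vol = [[[0] * 4 for _ in range(4)] for _ in range(4)]
for z in (0, 1):
    for y in (0, 1):
        for x in (0, 1):
            vol[z][y][x] = 1
vol[3][3][3] = 1
print(label_blebs(vol))  # [8, 1]
```

Tracking how the size distribution shifts toward fewer, larger components with increasing strain is one simple quantitative proxy for the bleb coalescence the abstract discusses.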

  18. 3D imaging in forensic odontology.

    PubMed

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring, and subsequently forensically analysing, bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion, so such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least intra-operator error. A second set tested and demonstrated which method of image capture creates the least inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  19. Transition from Paris dosimetry system to 3D image-guided planning in interstitial breast brachytherapy

    PubMed Central

    Wronczewska, Anna; Kabacińska, Renata; Makarewicz, Roman

    2015-01-01

    Purpose: The purpose of this study is to evaluate our first experience with 3D image-guided breast brachytherapy and to compare dose distribution parameters between Paris dosimetry system (PDS) and image-based plans. Material and methods: The first 49 breast cancer patients treated with 3D high-dose-rate interstitial brachytherapy as a boost were selected for the study. Every patient underwent computed tomography, and the planning target volume (PTV) and organs at risk (OAR) were outlined. Two treatment plans were created for every patient: the first based on the Paris dosimetry system (PDS), and the second an image-based plan with graphical optimization (OPT). The reference isodose in PDS implants was 85%, whereas in OPT plans the isodose was chosen to obtain proper target coverage. Dose and volume parameters (D90, D100, V90, V100), doses at OARs, total reference air kerma (TRAK), and the quality assurance parameters dose nonuniformity ratio (DNR), dose homogeneity index (DHI), and conformity index (COIN) were used for a comparison of both plans. Results: The mean number of catheters was 7, but the mean for the first 20 patients was 5 and almost 9 for the next 29 patients. The mean value of the prescribed isodose for OPT plans was 73%. The mean D90 was 88.2% and 105.8%, the D100 was 59.8% and 75.7%, the VPTV90 was 88.6% and 98.1%, the VPTV100 was 79.9% and 98.9%, and the TRAK was 0.00375 Gy·m⁻¹ and 0.00439 Gy·m⁻¹ for the PDS and OPT plans, respectively. The mean DNR was 0.29 and 0.42, the DHI was 0.71 and 0.58, and the COIN was 0.68 and 0.76, respectively. Conclusions: The target coverage in image-guided plans (OPT) was significantly higher than in PDS plans, but the dose homogeneity was worse. Also, the value of TRAK increased because of the change of prescribing isodose. The learning curve slightly affected our results. PMID:26816505
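The homogeneity figures quoted above are consistent with the standard definitions DNR = V150/V100 and DHI = (V100 − V150)/V100 = 1 − DNR: note that 0.29/0.71 and 0.42/0.58 each sum to one. A small check with hypothetical volumes (the absolute cm³ values below are our invention, chosen only to reproduce the PDS-plan ratio):

```python
# Standard brachytherapy QA indices. V100/V150 are the volumes receiving
# at least 100% / 150% of the prescribed dose; the cm^3 values are made up.

def dnr(v100, v150):
    """Dose nonuniformity ratio: fraction of the treated volume overdosed."""
    return v150 / v100

def dhi(v100, v150):
    """Dose homogeneity index: complements DNR to 1 by definition."""
    return (v100 - v150) / v100

v100, v150 = 50.0, 14.5  # hypothetical volumes reproducing DNR = 0.29
print(dnr(v100, v150), dhi(v100, v150))  # 0.29 0.71
```

The internal consistency (DNR + DHI = 1) explains why the abstract's worse DHI for the OPT plans is the same finding as their higher DNR.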

  20. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that can make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
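For the stereo-vision method mentioned above, depth follows from the standard rectified-pinhole relation Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels). The numbers below are illustrative values, not from the presentation:

```python
# Depth from disparity for rectified two-camera stereo: Z = f * B / d.
# Focal length, baseline, and disparity here are invented for illustration.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a matched point in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: point at infinity or mismatched")
    return f_px * baseline_m / disparity_px

# f = 800 px, 10 cm baseline: a 40 px disparity puts the surface at 2 m
z = depth_from_disparity(800, 0.10, 40)
print(z)  # 2.0
```

The relation also shows the working-distance trade-off the abstract alludes to: at long range the disparity shrinks toward the matching noise floor, so a longer baseline or longer focal length is needed to keep depth resolution.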

  1. Gothic Churches in Paris ST Gervais et ST Protais Image Matching 3d Reconstruction to Understand the Vaults System Geometry

    NASA Astrophysics Data System (ADS)

    Capone, M.; Campi, M.; Catuogno, R.

    2015-02-01

    This paper is part of a research project on ribbed vault systems in French Gothic cathedrals. Our goal is to compare several Gothic cathedrals to understand the complex geometry of the ribbed vaults. The survey is not the main objective; rather, it is the means to verify the theoretical hypotheses about the geometric configuration of the flamboyant churches in Paris. The choice of survey method generally depends on the goal; in this case we had to study many churches in a short time, so we chose a 3D reconstruction method based on dense image matching. This method allowed us to obtain the information necessary for our study without bringing special equipment, such as a laser scanner. The goal of this paper is to test the image-matching 3D reconstruction method on some particular case studies and to show its benefits and drawbacks. From a methodological point of view, our workflow is: a theoretical study of the geometrical configuration of rib vault systems; a 3D model based on the theoretical hypothesis about the geometric definition of the vaults' form; a 3D model based on image-matching 3D reconstruction methods; and a comparison between the 3D theoretical model and the 3D model based on image matching.

  2. Moving-Article X-Ray Imaging System and Method for 3-D Image Generation

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth R. (Inventor)

    2012-01-01

    An x-ray imaging system and method for a moving article are provided for an article moved along a linear direction of travel while the article is exposed to non-overlapping x-ray beams. A plurality of parallel linear sensor arrays are disposed in the x-ray beams after they pass through the article. More specifically, a first half of the plurality are disposed in a first of the x-ray beams while a second half of the plurality are disposed in a second of the x-ray beams. Each of the parallel linear sensor arrays is oriented perpendicular to the linear direction of travel. Each of the parallel linear sensor arrays in the first half is matched to a corresponding one of the parallel linear sensor arrays in the second half in terms of an angular position in the first of the x-ray beams and the second of the x-ray beams, respectively.

  3. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impact the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed using dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure, which generates a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.
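The planar-surface segmentation step can be illustrated with RANSAC plane fitting on the dense-matching point cloud. The paper's own segmentation method may differ; the synthetic points, tolerance, and iteration count below are our assumptions:

```python
import random

# RANSAC extraction of the dominant plane from a noisy point cloud.

def fit_plane(p, q, r):
    """Plane (normal n, offset d) through three points; n is not normalized."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    d = -sum(n[i] * p[i] for i in range(3))
    return n, d

def ransac_plane(points, tol=0.05, iters=200, seed=1):
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        p, q, r = rng.sample(points, 3)
        n, d = fit_plane(p, q, r)
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        inliers = [pt for pt in points
                   if abs(sum(n[i] * pt[i] for i in range(3)) + d) / norm < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# 40 points near the plane z = 0 plus 10 scattered outliers
rng = random.Random(0)
pts = [(rng.uniform(0, 1), rng.uniform(0, 1), rng.gauss(0, 0.01)) for _ in range(40)]
pts += [(rng.uniform(0, 1), rng.uniform(0, 1), rng.uniform(0.5, 2.0)) for _ in range(10)]
plane = ransac_plane(pts)
print(len(plane))  # close to 40: the dominant planar surface is recovered
```

In a full pipeline this would be run repeatedly, removing each extracted plane's inliers before searching for the next surface, and each planar region would then be textured from the source imagery.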

  4. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    This report describes a laser-based volumetric display system that generates 3D images on a rotating double helix, with the 3D displays computer controlled for group viewing with the naked eye. Authors: P. Soltan, J. Trias, W. Robinson, W. Dahlke.

  5. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high-quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/sec with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.

  6. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  7. Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging

    NASA Astrophysics Data System (ADS)

    Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas

    2016-03-01

    In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.

  8. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, with applications extending to the medical and entertainment fields, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  9. 3D information from 2D images recorded in the European Modular Cultivation System on the ISS

    NASA Astrophysics Data System (ADS)

    Solheim, B. G. B.

    2009-12-01

    The European Modular Cultivation System (EMCS) on the ISS allows long-term biological experiments, e.g. on plants. Video cameras provide near real-time 2D images from these experiments. A method to obtain 3D coordinates and stereoscopic images from these 2D images has been developed and is described in this paper. The procedure was developed to enhance the data output of the MULTIGEN-1 experiment in 2007. One of the main objectives of the experiment was to study growth movements of the Arabidopsis plants and the effect of gravity on these. 3D data were important during parts of the experiment and the paper presents the method developed to acquire 3D data, the accuracy of the data, limitations to the technique and ways to improve the accuracy. Sequences of 3D data obtained from the MULTIGEN-1 experiment are used to illustrate the potential of this newfound capability of the EMCS. In the experiment setup, a positional depth accuracy of about ±0.4 mm for relative object distances and an absolute depth accuracy of about ±1.4 mm for time dependent phenomena was reached. The ability to both view biological specimens in 3D as well as obtaining quantitative 3D data added greatly to the scientific output of the MULTIGEN-1 experiment. The uses of the technique to other researchers and their experiments are discussed.
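Depth accuracies like the ±0.4 mm quoted above are governed, to first order, by the stereo error-propagation relation ΔZ = Z²·Δd/(f·B): depth error grows quadratically with distance. The focal length, baseline, and matching error below are assumptions for illustration, not the EMCS camera parameters:

```python
# First-order depth-error model for rectified stereo: dZ = Z^2 * dd / (f * B).
# f (px), baseline (m), and the 0.5 px matching error are invented values.

def depth_error_m(z_m, f_px, baseline_m, disparity_err_px):
    """Approximate depth uncertainty at range z_m."""
    return z_m ** 2 * disparity_err_px / (f_px * baseline_m)

# f = 1000 px, 6 cm baseline, 0.5 px matching error
for z in (0.2, 0.4, 0.8):
    print(f"Z = {z} m -> dZ = {depth_error_m(z, 1000, 0.06, 0.5) * 1000:.2f} mm")
```

The quadratic growth is why the paper can report sub-millimeter relative accuracy for close plant specimens while the absolute accuracy over the whole working volume is several times worse.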

  10. The application of 3D image processing to studies of the musculoskeletal system

    NASA Astrophysics Data System (ADS)

    Hirsch, Bruce Elliot; Udupa, Jayaram K.; Siegler, Sorin; Winkelstein, Beth A.

    2009-10-01

    Three-dimensional renditions of anatomical structures are commonly used to improve visualization, surgical planning, and patient education. However, such 3D images also contain information which is not readily apparent, and which can be mined to elucidate, for example, parameters such as joint kinematics, spatial relationships, and distortions of those relationships with movement. Here we describe two series of experiments which demonstrate the functional application of 3D imaging. The first concerns the joints of the ankle complex, where the usual description of motions in the talocrural joint is shown to be incomplete, and where the roles of the anterior talofibular and calcaneofibular ligaments in ankle sprains are clarified. Also, the biomechanical effects of two common surgical procedures for repairing torn ligaments were examined. The second series of experiments explores changes in the anatomical relationships between nerve elements and the cervical vertebrae with changes in neck position. They provide preliminary evidence that morphological differences may exist between asymptomatic subjects and patients with radiculopathy in certain positions, even when conventional imaging shows no difference.

  11. 3D functional ultrasound imaging of the cerebral visual system in rodents.

    PubMed

    Gesnik, Marc; Blaize, Kevin; Deffieux, Thomas; Gennisson, Jean-Luc; Sahel, José-Alain; Fink, Mathias; Picaud, Serge; Tanter, Mickaël

    2017-02-03

    3D functional imaging of whole-brain activity during a visual task is challenging in rodents due to the complex three-dimensional shape of the involved brain regions and the fine spatial and temporal resolutions required to reveal the visual tract. By coupling functional ultrasound (fUS) imaging with a translational motorized stage and an episodic visual stimulation device, we managed to accurately map and recover in 3D the activity of the visual cortices, the Superior Colliculus (SC) and the Lateral Geniculate Nuclei (LGN). Cerebral Blood Volume (CBV) responses during visual stimuli were found to be highly correlated with the visual stimulus time profile in the visual cortices (r = 0.6), SC (r = 0.7) and LGN (r = 0.7). These responses were found to depend on flickering frequency and contrast, and optimal stimulus parameters for the largest CBV increases were obtained. In particular, increasing the flickering frequency above 7 Hz revealed a decrease of the visual cortices' response while the SC response was preserved. Finally, cross-correlation between CBV signals exhibited significant delays (d = 0.35 ± 0.1 s) between the blood volume response in the SC and in the visual cortices in response to our visual stimulus. These results emphasize the interest of fUS imaging as a whole-brain neuroimaging modality for vision studies in rodent models.
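The r values above are Pearson correlations between each region's CBV time course and the stimulus time profile. A minimal stdlib computation on synthetic stand-in signals (the boxcar stimulus and delayed response below are invented, not fUS data):

```python
import math

# Pearson correlation between a stimulus profile and a hemodynamic response.

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# boxcar stimulus and a weaker, slightly delayed CBV response (2-sample lag)
stim = [0.0] * 10 + [1.0] * 10 + [0.0] * 10
cbv = [0.0] * 12 + [0.8] * 10 + [0.0] * 8
print(round(pearson_r(stim, cbv), 2))  # 0.7
```

Note that the response amplitude cancels out of r entirely; it is the 2-sample hemodynamic lag that pulls the correlation below 1, which is why cross-correlation (rather than zero-lag correlation) is the right tool for the inter-region delays the abstract reports.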

  12. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

    Wang, Zheng

    2012-07-01

    A true 3D image is a geo-referenced image. Besides its radiometric information, it also has true 3D ground coordinates XYZ for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will not only be able to read a building's location (XY), but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on how geospatial information is represented, how true 3D ground modeling is performed, and how real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make to geospatial information fields. At the end, the paper presents a list of the benefits of having and using true 3D images, and the applications of true 3D images in a couple of 3D city modeling projects.

  13. A model and simulation to predict the performance of angle-angle-range 3D flash ladar imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2004-11-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. 3D Flash LADAR is the latest evolution of laser radar systems and provides a unique capability: high-resolution LADAR imagery from a single laser pulse, rather than an image constructed from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance from these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants, and; 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function resulting in a different gain for each pixel within the array. In this manner, very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of
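
The per-pixel gain sampling described above (integrating the normal distribution into a cumulative probability function, then comparing a random number against it for each pixel) is essentially inverse-transform sampling. A sketch under assumed parameters, not the reported implementation:

```python
import numpy as np

def sample_pixel_gains(mean_gain, sigma_gain, shape, n_grid=4096, seed=0):
    """Draw a gain for every pixel by comparing a uniform random number
    with the tabulated CDF of a normal gain distribution (inverse-transform
    sampling). All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    # Numerically integrate the normal PDF over a gain grid to get the CDF.
    g = np.linspace(mean_gain - 5 * sigma_gain, mean_gain + 5 * sigma_gain, n_grid)
    pdf = np.exp(-0.5 * ((g - mean_gain) / sigma_gain) ** 2) \
          / (sigma_gain * np.sqrt(2 * np.pi))
    cdf = np.cumsum(pdf) * (g[1] - g[0])
    cdf /= cdf[-1]
    # For each pixel, pick the gain whose CDF value matches a uniform draw.
    u = rng.random(shape)
    return g[np.searchsorted(cdf, u)]

gains = sample_pixel_gains(mean_gain=100.0, sigma_gain=5.0, shape=(128, 128))
print(gains.mean(), gains.std())  # close to 100 and 5
```

Every pixel in the array thus receives its own gain, reproducing the pixel-to-pixel non-uniformity the model attributes to the detector.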

  14. A model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2005-10-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. 3D Flash LADAR is the latest evolution of laser radar systems and provides a unique capability: high-resolution LADAR imagery from a single laser pulse, rather than an image constructed from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance from these 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants, and; 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function resulting in a different gain for each pixel within the array. In this manner, very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, probability of

  15. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  16. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

    SYNOPSIS Significant progress has been made in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future of clinical use. With effective flow suppression techniques, a choice of different contrast-weighted acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging plane and view angle analysis, large coverage, multi-vascular-bed capability, and can even be used for fast screening. PMID:26610656

  17. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    This paper gives a detailed review of various current 3D display methods for sequential 2D medical images and of new developments in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping and distributed collaborative rendering are discussed in depth. For two kinds of medical applications, real-time navigation systems and high-fidelity diagnosis in computer-aided surgery, different 3D display methods are presented.

  18. User-guided segmentation of preterm neonate ventricular system from 3-D ultrasound images using convex optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; McLeod, Jonathan; Chen, Yimin; de Ribaupierre, Sandrine; Fenster, Aaron

    2015-02-01

    A three-dimensional (3-D) ultrasound (US) system has been developed to monitor the intracranial ventricular system of preterm neonates with intraventricular hemorrhage (IVH) and the resultant dilation of the ventricles (ventriculomegaly). To measure ventricular volume from 3-D US images, a semi-automatic convex optimization-based approach is proposed for segmentation of the cerebral ventricular system in preterm neonates with IVH from 3-D US images. The proposed semi-automatic segmentation method makes use of the convex optimization technique supervised by user-initialized information. Experiments using 58 patient 3-D US images reveal that our proposed approach yielded a mean Dice similarity coefficient of 78.2% compared with the surfaces that were manually contoured, suggesting good agreement between these two segmentations. Additional metrics, the mean absolute distance of 0.65 mm and the maximum absolute distance of 3.2 mm, indicated small distance errors for a voxel spacing of 0.22 × 0.22 × 0.22 mm³. The Pearson correlation coefficient (r = 0.97, p < 0.001) indicated a significant correlation of algorithm-generated ventricular system volume (VSV) with the manually generated VSV. The calculated minimal detectable difference in ventricular volume change indicated that the proposed segmentation approach with 3-D US images is capable of detecting a VSV difference of 6.5 cm³ with 95% confidence, suggesting that this approach might be used for monitoring IVH patients' ventricular changes using 3-D US imaging. The mean segmentation times of the graphics processing unit (GPU)- and central processing unit-implemented algorithms were 50 ± 2 and 205 ± 5 s for one 3-D US image, respectively, in addition to 120 ± 10 s for initialization, less than the approximately 35 min required by manual segmentation. In addition, repeatability experiments indicated that the intra-observer variability ranges from 6.5% to 7.5%, and the inter-observer variability is 8.5% in terms
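
The Dice similarity coefficient used above to compare algorithm and manual segmentations is 2|A∩B| / (|A| + |B|). A minimal illustration on toy binary volumes (not the study's data):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations:
    2*|A intersect B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy volumes standing in for the algorithm vs. manual ventricle masks:
# two 6x6x6 cubes offset by one voxel along the first axis.
auto = np.zeros((10, 10, 10), bool)
auto[2:8, 2:8, 2:8] = True      # 216 voxels
manual = np.zeros((10, 10, 10), bool)
manual[3:9, 2:8, 2:8] = True    # 216 voxels, shifted by 1

print(dice(auto, manual))  # 180 shared voxels -> 360/432 = 0.833...
```

A value of 1.0 means perfect overlap; the study's 78.2% indicates good but imperfect agreement with the manual contours.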

  19. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Final Report (5 Feb 98) for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Contract Number SPO100-95-D-1014; Contractor: Ohio University; Delivery Order #0001; Delivery Order Title: 3D Scan Systems Integration.

  20. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes the acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in the calculation of effective dose from these examinations. In the study, the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image; 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images; and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.
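
The three calculation methods can be contrasted in a few lines. The per-projection DAP values and the angle-dependent dose conversion factors below are invented stand-ins for the phantom measurements and the PCXMC Monte Carlo output:

```python
import numpy as np

# Hypothetical per-projection data for a C-arm 3D acquisition: per-image
# dose-area product (Gy*cm^2) and a made-up angle-dependent DAP-to-effective-
# dose conversion factor (mSv per Gy*cm^2) standing in for PCXMC results.
n_proj = 133
angles = np.linspace(0, 200, n_proj)
dap = 0.05 + 0.01 * np.cos(np.radians(angles))       # per-image DAP
conv = 0.15 + 0.05 * np.cos(np.radians(2 * angles))  # conversion factor

# Method 1: individual exposure values for every projection image.
e1 = np.sum(dap * conv)

# Method 2: total DAP spread evenly over all projection images.
e2 = np.sum(np.full(n_proj, dap.sum() / n_proj) * conv)

# Method 3: total DAP spread over a coarse subset of projection angles.
subset = np.arange(0, n_proj, 10)
e3 = np.sum(np.full(len(subset), dap.sum() / len(subset)) * conv[subset])

for name, e in [("per-projection", e1), ("mean over all", e2), ("subset", e3)]:
    print(f"{name}: {e:.3f} mSv")
```

With real conversion factors, the study found methods 1 and 2 agree within 5%, while the coarse-subset method 3 can differ by over 10%.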

  1. Integrated 3D macro-trapping and light-sheet imaging system

    NASA Astrophysics Data System (ADS)

    Yang, Zhengyi; Piksarv, Peeter; Ferrier, David E. K.; Gunn-Moore, Frank J.; Dholakia, Kishan

    2015-08-01

    Biological research requires high-speed, low-damage imaging techniques for live specimens in areas such as developmental studies in embryos. Light sheet microscopy provides fast imaging while keeping photo-damage and photo-bleaching to a minimum. Conventional sample embedding methods in light sheet imaging involve agents such as agarose, which can affect the behavior and developmental patterns of the specimens. Here we demonstrate the integration of a dual-beam trapping method into a light sheet imaging system to confine and translate the specimen while light sheet images are taken. Tobacco plant cells as well as Spirobranchus lamarcki larvae were trapped solely with optical forces and sectional images were acquired. This new approach has the potential to significantly extend the applications of light sheet imaging.

  2. 3D cerebral MR image segmentation using multiple-classifier system.

    PubMed

    Amiri, Saba; Movahedi, Mohammad Mehdi; Kazemi, Kamran; Parsaei, Hossein

    2017-03-01

    The three soft brain tissues white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) identified in a magnetic resonance (MR) image via image segmentation techniques can aid in structural and functional brain analysis, measurement and visualization of the brain's anatomical structures, diagnosis of neurodegenerative disorders, and surgical planning and image-guided interventions, but only if the obtained segmentation results are correct. This paper presents a multiple-classifier-based system for automatic brain tissue segmentation from cerebral MR images. The developed system categorizes each voxel of a given MR image as GM, WM, or CSF. The algorithm consists of preprocessing, feature extraction, and supervised classification steps. In the first step, intensity non-uniformity in a given MR image is corrected and then non-brain tissues such as the skull, eyeballs, and skin are removed from the image. For each voxel, statistical and non-statistical features were computed and used as a feature vector representing the voxel. Three multilayer perceptron (MLP) neural networks trained using three different datasets were used as the base classifiers of the multiple-classifier system. The outputs of the base classifiers were fused using a majority voting scheme. Evaluation of the proposed system was performed using BrainWeb simulated MR images with different noise and intensity non-uniformity levels and Internet Brain Segmentation Repository (IBSR) real MR images. The quantitative assessment of the proposed method using Dice, Jaccard, and conformity coefficient metrics demonstrates improvement (around 5% for CSF) in terms of accuracy as compared to a single MLP classifier and existing methods and tools such as FSL-FAST and SPM. As accurately segmenting an MR image is of paramount importance for successfully promoting the clinical application of MR image segmentation techniques, the improvement obtained by using the multiple-classifier-based system is encouraging.
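
The majority-voting fusion step can be sketched as follows. Labels are encoded as integers, and the three "MLP outputs" are made up for illustration:

```python
import numpy as np

# Labels encoded as integers: 0 = GM, 1 = WM, 2 = CSF
def majority_vote(predictions):
    """Fuse per-voxel labels from several base classifiers (rows) by
    majority vote; ties resolve toward the lowest label index."""
    preds = np.asarray(predictions)   # shape (n_classifiers, n_voxels)
    counts = np.stack([(preds == k).sum(axis=0) for k in range(3)])
    return counts.argmax(axis=0)

# Three hypothetical MLP label maps over five voxels
mlp_a = [0, 1, 2, 1, 0]
mlp_b = [0, 1, 1, 1, 2]
mlp_c = [0, 2, 2, 1, 2]
print(majority_vote([mlp_a, mlp_b, mlp_c]))  # [0 1 2 1 2]
```

Each voxel takes the label chosen by most base classifiers, which is how disagreements among the three MLPs are resolved in the described system.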

  3. A model and simulation to predict 3D imaging LADAR sensor systems performance in real-world type environments

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Russo, Leonard E.

    2006-08-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. Accurate methods to model and simulate performance from 3D LADAR systems have been lacking, relying upon either single-pixel LADAR performance or extrapolation from passive detection FPA performance. The model and simulation here is developed expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) detector noise figure; 4) detector gain; 5) target attributes; 6) atmospheric transmission; 7) atmospheric backscatter; 8) atmospheric turbulence; 9) obscurants; 10) obscurant path length, and; 11) platform motion. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel. Here, noise sources and gain are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function resulting in a different gain for each pixel within the array. In this manner, very accurate performance is determined pixel-by-pixel for the entire array. Model outputs are 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array.

  4. Evolving technologies for growing, imaging and analyzing 3D root system architecture of crop plants.

    PubMed

    Piñeros, Miguel A; Larson, Brandon G; Shaff, Jon E; Schneider, David J; Falcão, Alexandre Xavier; Yuan, Lixing; Clark, Randy T; Craft, Eric J; Davis, Tyler W; Pradier, Pierre-Luc; Shaw, Nathanael M; Assaranurak, Ithipong; McCouch, Susan R; Sturrock, Craig; Bennett, Malcolm; Kochian, Leon V

    2016-03-01

    A plant's ability to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, can be strongly influenced by root system architecture (RSA), the three-dimensional distribution of the different root types in the soil. The ability to image, track and quantify these root system attributes in a dynamic fashion is a useful tool in assessing desirable genetic and physiological root traits. Recent advances in imaging technology and phenotyping software have resulted in substantive progress in describing and quantifying RSA. We have designed a hydroponic growth system which retains the three-dimensional RSA of the plant root system, while allowing for aeration, solution replenishment and the imposition of nutrient treatments, as well as high-quality imaging of the root system. The simplicity and flexibility of the system allows for modifications tailored to the RSA of different crop species and improved throughput. This paper details the recent improvements and innovations in our root growth and imaging system which allows for greater image sensitivity (detection of fine roots and other root details), higher efficiency, and a broad array of growing conditions for plants that more closely mimic those found under field conditions.

  5. Proposed NRC portable target case for short-range triangulation-based 3D imaging systems characterization

    NASA Astrophysics Data System (ADS)

    Carrier, Benjamin; MacKinnon, David; Cournoyer, Luc; Beraldin, J.-Angelo

    2011-03-01

    The National Research Council of Canada (NRC) is currently evaluating and designing artifacts and methods to completely characterize 3-D imaging systems. We have gathered a set of artifacts to form a low-cost portable case and provide a clearly-defined set of procedures for generating characteristic values using these artifacts. In its current version, this case is specifically designed for the characterization of short-range (standoff distance of 1 centimeter to 3 meters) triangulation-based 3-D imaging systems. The case is known as the "NRC Portable Target Case for Short-Range Triangulation-based 3-D Imaging Systems" (NRC-PTC). The artifacts in the case have been carefully chosen for their geometric, thermal, and optical properties. A set of characterization procedures is provided with these artifacts, based on procedures either already in use or derived from knowledge acquired through various tests carried out by the NRC. Geometric dimensioning and tolerancing (GD&T), a well-known terminology in the industrial field, was used to define the set of tests. The following parameters of a system are characterized: dimensional properties, form properties, orientation properties, localization properties, profile properties, repeatability, intermediate precision, and reproducibility. A number of tests were performed in a special dimensional metrology laboratory to validate the capability of the NRC-PTC. The NRC-PTC will soon be subjected to reproducibility testing using an intercomparison evaluation to validate its use in different laboratories.

  6. Geodetic Reference System Transformations of 3d Archival Geospatial Data Using a Single SSC Terrasar-X Image

    NASA Astrophysics Data System (ADS)

    Vassilaki, D. I.; Stamos, A. A.

    2016-06-01

    Many older maps were created using reference coordinate systems that are no longer available, either because no information about the datum was recorded in the first place or because the reference system has been forgotten. In other cases, the relationship between the map's coordinate system and a modern reference system is not known with precision, meaning that the map's absolute error is much larger than its relative error. In this paper, the georeferencing of medium-scale maps is computed using a single TerraSAR-X image. A single TerraSAR-X image has high geolocation accuracy but no 3D information. The map, however, provides the missing 3D information, and thus it is possible to compute the georeferencing of the map using the TerraSAR-X geolocation information, combining the information of both sources to produce 3D points in the reference system of the TerraSAR-X image. Two methods based on this concept are proposed. The methods are tested with real-world examples and the results are promising for further research.

  7. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
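
The persistent (persistence) baseline against which the 26% improvement is measured simply forecasts that irradiance stays at its last observed value. A sketch of scoring against that baseline on synthetic data (the irradiance trace, resolution, and horizon are invented):

```python
import numpy as np

def persistence_forecast(irradiance, horizon):
    """Persistent model: the forecast equals the last observed value."""
    return irradiance[:-horizon]

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Synthetic irradiance trace with a cloud-induced ramp (illustrative only):
# 600 samples at 1 s resolution, dropping from ~800 to ~500 W/m^2.
rng = np.random.default_rng(1)
t = np.arange(600)
truth = 800 - 300 / (1 + np.exp(-(t - 300) / 20)) + rng.normal(0, 10, t.size)

horizon = 60  # one-minute-ahead forecast at 1 s resolution
persist = persistence_forecast(truth, horizon)
target = truth[horizon:]

rmse_persist = rmse(persist, target)
print(f"persistence RMSE: {rmse_persist:.1f} W/m^2")
# A cloud-tracking model's skill is then: 1 - rmse_model / rmse_persist
```

Any cloud-tracking forecast is scored the same way, and the reported 26% improvement corresponds to a skill score of at least 0.26 against this baseline.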

  8. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  9. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, and aerosols, imaging through walls as in hostage situations, and imaging in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of the concealed objects from the body and environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of standoff detection of suspicious objects and humans from large distances.

  10. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  11. Scanning all-fiber-optic endomicroscopy system for 3D nonlinear optical imaging of biological tissues

    PubMed Central

    Wu, Yicong; Leng, Yuxin; Xi, Jiefeng; Li, Xingde

    2009-01-01

    An extremely compact all-fiber-optic scanning endomicroscopy system was developed for two-photon fluorescence (TPF) and second harmonic generation (SHG) imaging of biological samples. A conventional double-clad fiber (DCF) was employed in the endomicroscope for single-mode femtosecond pulse delivery, collection of multimode nonlinear optical signals, and fast two-dimensional scanning. A single photonic bandgap fiber (PBF) with negative group velocity dispersion at the two-photon excitation wavelength (~810 nm) was used for pulse prechirping in place of a bulky grating/lens-based pulse stretcher. The combined use of DCF and PBF in the endomicroscopy system made the endomicroscope essentially a plug-and-play unit. The excellent imaging ability of the extremely compact all-fiber-optic nonlinear optical endomicroscopy system was demonstrated by SHG imaging of rat tail tendon and depth-resolved TPF imaging of epithelial tissues stained with acridine orange. The preliminary results suggest the promising potential of this extremely compact all-fiber-optic endomicroscopy system for real-time assessment of both epithelial and stromal structures in luminal organs. PMID:19434122

  12. Optoacoustic system for 3D functional and molecular imaging in nude mice

    NASA Astrophysics Data System (ADS)

    Fronheiser, Matthew P.; Stein, Alan; Herzog, Don; Thompson, Scott; Liopo, Anton; Eghtedari, Mohammad; Motamedi, Massoud; Ermilov, Sergey; Conjusteau, Andre; Gharieb, Reda; Lacewell, Ron; Miller, Tom; Mehta, Ketan; Oraevsky, Alexander A.

    2008-02-01

    A three-dimensional laser optoacoustic imaging system was developed, which combines the advantages of optical spectroscopy and high resolution ultrasonic detection, to produce high contrast maps of optical absorbance in tissues. This system was tested in a nude mouse model of breast cancer and produced tissue images of tumors and vasculature. The imaging can utilize either the optical properties of hemoglobin and oxyhemoglobin, which are the main endogenous tissue chromophores in the red and near-infrared spectral ranges, or an exogenous contrast agent based on gold nanorods. Visualization of tissue molecules targeted by the contrast agent provides molecular information. Visualization of blood at multiple colors of light provides functional information on blood concentration and oxygen saturation. Optoacoustic imaging, using two or more laser illumination wavelengths, permits an assessment of the angiogenesis-related microvasculature, and thereby, an evaluation of the tumor stage and its metastatic potential. The optoacoustic imaging system was also used to generate molecular images of the malignancy-related receptors induced by the xenografts of BT474 mammary adenocarcinoma cells in nude mice. The development of the latter images was facilitated by the use of an optoacoustic contrast agent that utilizes gold nanorods conjugated to a monoclonal antibody raised against HER2/neu antigens. These nanorods possess a very strong optical absorption peak that can be tuned in the near-infrared by changing their aspect ratio. The effective conversion of the optical energy into heat by the gold nanorods, followed by the thermal expansion of the surrounding water, makes these nanoparticles an effective optoacoustic contrast agent. Optical scattering methods and x-ray tomography may also benefit from the application of this contrast agent. Administration of the gold nanorod bioconjugates to mice resulted in an enhanced contrast of breast tumors relative to the background of normal tissues.

  13. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and three-dimensional motion vectors, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  14. Full-color holographic 3D imaging system using color optical scanning holography

    NASA Astrophysics Data System (ADS)

    Kim, Hayan; Kim, You Seok; Kim, Taegeun

    2016-06-01

    We propose a full-color holographic three-dimensional imaging system composed of a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are transmitted to the reconstruction stage after conversion to off-axis RGB holograms. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.

  15. Calculation of the Slip System Activity in Deformed Zinc Single Crystals Using Digital 3-D Image Correlation Data

    SciTech Connect

    Florando, J; Rhee, M; Arsenlis, A; LeBlanc, M; Lassila, D

    2006-02-21

    A 3-D image correlation system, which measures the full-field displacements in three dimensions, has been used to experimentally determine the full deformation gradient matrix for two zinc single crystals. Based on the image correlation data, the slip system activity for the two crystals has been calculated. The results of the calculation show that for one crystal, only the primary slip system is active, which is consistent with traditional theory. The other crystal, however, shows appreciable deformation on slip systems other than the primary. An analysis has been conducted which confirms the experimental observation that these other slip systems deform in such a manner that the net result is slip approximately one third the magnitude of, and directly orthogonal to, the primary system.
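One common way to pose the calculation described in this record is as a least-squares fit of the measured strain onto the symmetric Schmid tensors of candidate slip systems. The sketch below is a hedged illustration of that general technique, with hypothetical slip geometries; it does not reproduce the actual zinc slip systems or the authors' analysis:

```python
import numpy as np

def slip_activity(strain, slip_systems):
    """Least-squares decomposition of a symmetric strain tensor (derived
    from a measured deformation gradient) onto the symmetric Schmid
    tensors of candidate slip systems; returns the slip magnitudes.

    strain       : 3x3 symmetric strain tensor
    slip_systems : list of (slip_direction, plane_normal) unit vectors
    """
    cols = []
    for s, n in slip_systems:
        P = 0.5 * (np.outer(s, n) + np.outer(n, s))  # symmetric Schmid tensor
        cols.append(P.ravel())
    A = np.array(cols).T                             # (9, n_systems)
    gam, *_ = np.linalg.lstsq(A, strain.ravel(), rcond=None)
    return gam

# Synthetic check with two hypothetical systems sharing a plane normal:
# strain built from the first system alone is recovered exactly.
s1, n1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
s2, n2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])
P1 = 0.5 * (np.outer(s1, n1) + np.outer(n1, s1))
gam = slip_activity(0.02 * P1, [(s1, n1), (s2, n2)])
print(np.allclose(gam, [0.02, 0.0]))  # -> True
```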

  16. [EOS imaging acquisition system : 2D/3D diagnostics of the skeleton].

    PubMed

    Tarhan, T; Froemel, D; Meurer, A

    2015-12-01

    The application spectrum of the EOS imaging acquisition system is versatile. It is especially useful in the diagnostics and planning of corrective surgical procedures in complex orthopedic cases. Its application is indicated when assessing deformities and malpositions of the spine, pelvis and lower extremities. It can also be used in the assessment and planning of hip and knee arthroplasty. For the first time, physicians have the opportunity to conduct examinations of the whole body under weight-bearing conditions in order to anticipate the effects of a planned surgical procedure on the skeletal system as a whole, and therefore on the posture of the patient. Compared with conventional radiographic examination techniques, such as plain x-ray or computed tomography, the EOS system exposes the patient to much less radiation; the pediatric application of this technique can therefore be described as reasonable.

  17. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord.

    PubMed

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-02-17

    Faults in the vascular (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, the lack of knowledge of the complete fine structure of the VN and neuronal system represents a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of ex vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with no contrast agent, sectioning, or destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles and neuron somata. Our approach is well suited to pre-clinical investigation of neurodegenerative pathologies and spinal cord injuries, in particular to resolving the entangled relationship between the VN and the neuronal system.

  18. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    PubMed Central

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-01-01

    Faults in the vascular (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, the lack of knowledge of the complete fine structure of the VN and neuronal system represents a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of ex vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with no contrast agent, sectioning, or destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles and neuron somata. Our approach is well suited to pre-clinical investigation of neurodegenerative pathologies and spinal cord injuries, in particular to resolving the entangled relationship between the VN and the neuronal system. PMID:25686728

  19. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    NASA Astrophysics Data System (ADS)

    Fratini, Michela; Bukreeva, Inna; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spanò, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2015-02-01

    Faults in the vascular (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, the lack of knowledge of the complete fine structure of the VN and neuronal system represents a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of ex vivo mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with no contrast agent, sectioning, or destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles and neuron somata. Our approach is well suited to pre-clinical investigation of neurodegenerative pathologies and spinal cord injuries, in particular to resolving the entangled relationship between the VN and the neuronal system.

  20. A 3D High Frequency Array Based 16 Channel Photoacoustic Microscopy System for In Vivo Micro-vascular Imaging

    PubMed Central

    Zemp, Roger; Yen, Jesse; Wang, L.V.; Shung, K. Kirk

    2009-01-01

    This paper discusses the design of a novel photoacoustic microscopy imaging system with promise for studying the structure of tissue microvasculature for applications in visualizing angiogenesis. A new sixteen-channel analog and digital high-frequency array-based photoacoustic microscopy (PAM) system was developed using an Nd:YLF-pumped tunable dye laser, a 30-MHz piezocomposite linear array transducer, and a custom multi-channel receiver electronics system. Using offline delay-and-sum beamforming and beamsteering, phantom images were obtained from a 6-µm carbon fiber in water at a depth of 8 mm. The measured -6-dB lateral and axial spatial resolutions of the system were 100 ± 5 µm and 45 ± 5 µm, respectively. The dynamic focusing capability of the system was demonstrated by imaging a composite carbon fiber matrix through a 12.5-mm imaging depth. Next, 2-D in vivo images were formed of vessels around 100 µm in diameter in the human hand. 3-D in vivo images were also formed of micro-vessels 3 mm below the surface of the skin in two Sprague Dawley rats. PMID:19131292
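Offline delay-and-sum beamforming, as used in this record, sums each channel's RF sample at the one-way time of flight from a chosen focal point (one-way because in photoacoustics the tissue itself is the acoustic source). A minimal nearest-sample sketch, with assumed element pitch and sampling rate rather than the authors' hardware values:

```python
import numpy as np

C = 1500.0   # speed of sound in water, m/s
FS = 250e6   # sampling rate, Hz (assumed, not the paper's value)

def das_beamform(rf, elem_x, focus):
    """Delay-and-sum a (n_elements, n_samples) RF array at one focal point.

    rf      : per-channel photoacoustic RF data
    elem_x  : lateral element positions (m); elements lie on the z = 0 line
    focus   : (x, z) focal point in metres
    """
    fx, fz = focus
    # One-way propagation: source-to-element distance only.
    dist = np.sqrt((elem_x - fx) ** 2 + fz ** 2)
    idx = np.round(dist / C * FS).astype(int)
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()

# Synthetic check: a point source 8 mm deep puts a unit spike on each
# channel at its own one-way delay; the beamformer sums them coherently.
elem_x = (np.arange(16) - 7.5) * 100e-6          # 16 elements, 100-um pitch
rf = np.zeros((16, 4096))
d = np.sqrt(elem_x ** 2 + 0.008 ** 2)
rf[np.arange(16), np.round(d / C * FS).astype(int)] = 1.0
print(das_beamform(rf, elem_x, (0.0, 0.008)))    # -> 16.0
```

A real beamformer would interpolate between samples and apply an apodization window; the nearest-sample version above is only the skeleton of the idea.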

  1. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects in a particular scene grows, overlapping objects tend to be masked; this clutter can be managed through the effective use of total or partial transparency (i.e., the alpha channel). In this way, the co-variation between different datasets can be investigated.
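The opacity filter described in this record, rejecting voxels whose reflectivity falls outside a band so the viewer can peer into the volume, amounts to a per-voxel alpha assignment. A minimal NumPy illustration; the thresholds and the linear fade-in ramp are hypothetical choices, not taken from the abstract:

```python
import numpy as np

def voxel_alpha(volume, lo, hi):
    """Assign per-voxel opacity for volume rendering: voxels whose
    reflectivity magnitude is below lo are fully transparent (alpha = 0);
    above that, alpha ramps linearly up to 1 at magnitude hi."""
    a = np.clip((np.abs(volume) - lo) / (hi - lo), 0.0, 1.0)
    a[np.abs(volume) < lo] = 0.0
    return a

# Weak background voxels vanish; strong melt-lens reflections stay opaque.
vol = np.array([0.05, 0.3, 0.9])
print(voxel_alpha(vol, lo=0.2, hi=0.8))  # background -> 0, strong -> 1
```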

  2. Research of range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Yang, Haitao; Zhao, Hongli; Youchen, Fan

    2016-10-01

    Laser image data-based target recognition is one of the key technologies of laser active imaging systems. This paper reviews the state of 3-D imaging development in China and abroad and analyzes the current technological bottlenecks. A prototype range-gated system was built to obtain a set of range-gated slice images, from which 3-D images of the target were reconstructed by the binary method and the centroid method; by reconstructing from different numbers of slice images, the relationship between the number of images and the reconstruction accuracy was explored. The experiment analyzed the impact of the two algorithms, the binary method and the centroid method, on the results of 3-D image reconstruction. For the binary method, a comparative analysis was made of the impact of different threshold values on the reconstruction results, with thresholds of 0.1, 0.2, 0.3 and an adaptive threshold selected for 3-D reconstruction of the slice images. For the centroid method, 15, 10, 6, 3 and 2 images, respectively, were used for 3-D reconstruction. Experimental results showed that, with the same number of slice images, the accuracy of the centroid method was higher than that of the binary method, and the binary method depended strongly on the choice of threshold; as the number of slice images dwindled, the accuracy of images reconstructed by the centroid method decreased, and at least three slice images were required to obtain one 3-D image.
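The two reconstruction rules compared in this record can be sketched per pixel: the binary method assigns each pixel the range of the first gate whose intensity exceeds a threshold, while the centroid method takes the intensity-weighted mean of the gate ranges. A minimal NumPy sketch under assumed array shapes and gate spacing (the actual system parameters are not given in the abstract):

```python
import numpy as np

def centroid_depth(slices, gate_depths):
    """Centroid method: each pixel's range is the intensity-weighted
    mean of the gate depths across the slice stack.

    slices      : (n_slices, H, W) intensity images
    gate_depths : (n_slices,) range of each gate in metres
    """
    w = slices.sum(axis=0)
    num = np.tensordot(gate_depths, slices, axes=1)
    return np.where(w > 0, num / np.where(w > 0, w, 1.0), 0.0)

def binary_depth(slices, gate_depths, thresh):
    """Binary method: a pixel's range is the first gate whose intensity
    exceeds the threshold (0 where no gate fires)."""
    hit = slices > thresh
    first = hit.argmax(axis=0)
    return np.where(hit.any(axis=0), gate_depths[first], 0.0)

# One pixel whose return peaks in the middle of three gates.
depths = np.array([10.0, 10.5, 11.0])          # gate ranges (m)
stack = np.array([[[0.2]], [[1.0]], [[0.2]]])
print(round(float(centroid_depth(stack, depths)[0, 0]), 3))  # -> 10.5
print(binary_depth(stack, depths, 0.5)[0, 0])                # -> 10.5
```

The sub-gate interpolation of the centroid weighting is what gives it the accuracy advantage the abstract reports, at the cost of needing several slices per pixel.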

  3. Occluded human recognition for a leader following system using 3D range and image data in forest environment

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Ilyas, Muhammad; Baeg, Seung-Ho; Park, Sangdeok

    2014-06-01

    This paper describes an occluded-target recognition and tracking method for a leader-following system that fuses 3D range and image data acquired from a 3D light detection and ranging (LIDAR) sensor and a color camera installed on an autonomous vehicle in a forest environment. During 3D data processing, distance-based clustering has an intrinsic problem in close encounters. In the tracking phase, we divide object tracking into three phases based on the occlusion scenario: a before-occlusion (BO) phase, a partial or full occlusion phase, and an after-occlusion (AO) phase. To improve data association performance, we use the camera's rich information to find correspondences among objects during these three phases. We solve the correspondence problem using the color features of human objects with the sum of squared differences (SSD) and the normalized cross correlation (NCC); the features are integrated with windows derived from Harris corners. Experimental results for leader following on an autonomous vehicle with LIDAR and camera demonstrate improved data association in a multiple-object tracking system.
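The two patch-similarity measures named in this record are standard. A minimal NumPy sketch; the patch size and the synthetic brightness/contrast shift are illustrative, but they show why NCC is the more robust choice under the lighting changes common in outdoor scenes:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-size patches
    (lower means more similar)."""
    return float(((a - b) ** 2).sum())

def ncc(a, b):
    """Normalized cross correlation in [-1, 1] (higher means more
    similar); invariant to uniform brightness and contrast changes."""
    za, zb = a - a.mean(), b - b.mean()
    return float((za * zb).sum() / np.sqrt((za ** 2).sum() * (zb ** 2).sum()))

# A patch matched against a brightness/contrast-shifted copy of itself:
# SSD reports a large difference, NCC still reports a perfect match.
rng = np.random.default_rng(0)
patch = rng.random((16, 16))
shifted = 2.0 * patch + 0.5
print(ssd(patch, shifted) > 0.0)      # -> True
print(round(ncc(patch, shifted), 6))  # -> 1.0
```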

  4. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image processing or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and the faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  5. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

    Glaucoma is the second leading cause of blindness in the world, and the number of cases is expected to rise as the life expectancy of the population increases. Glaucoma comprises eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the patient's visual quality. In the majority of cases the damage to the optic nerve is irreversible and results from increased intraocular pressure. One of the main diagnostic challenges is detecting the disease at all, because no symptoms are present in the initial stage; by the time it is detected, it is already at an advanced stage. Currently, evaluation of the optic disc is performed with sophisticated fundus cameras that are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or a red-free system, to capture 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and non-mydriatic modes.

  6. Evaluating the bending response of two osseointegrated transfemoral implant systems using 3D digital image correlation.

    PubMed

    Thompson, Melanie L; Backman, David; Branemark, Rickard; Mechefske, Chris K

    2011-05-01

    Osseointegrated transfemoral implants have been introduced as a prosthetic solution for above-knee amputees. They have shown great promise, providing an alternative for individuals who could not be accommodated by conventional, socket-based prostheses; however, the occurrence of device failures is of concern. In an effort to improve the strength and longevity of the device, a new design has been proposed. This study investigates the mechanical behavior of the new taper-based assembly in comparison to the current hex-based connection for osseointegrated transfemoral implant systems. This was done to better understand the behavior of components under loading, in order to optimize the assembly specifications and improve the useful life of the system. Digital image correlation was used to measure surface strains on two assemblies during static loading in bending. This provided a means to measure deformation over the entire sample and identify critical locations as the assembly was subjected to a series of loading conditions. It provided a means to determine the effects of tightening specifications and connection geometry on the material response and mechanical behavior of the assemblies. Both osseointegrated assemblies exhibited improved strength and mechanical performance when tightened to a level beyond the currently specified tightening torque of 12 N·m. This was shown by decreased strain concentration values and improved distribution of tensile strain. Increased tightening torque provides an improved connection between components regardless of design, leading to increased torque retention, decreased peak tensile strain values, and a more gradual, primarily compressive distribution of strains throughout the assembly.

  7. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles were first recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems: the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  8. Mobile Biplane X-Ray Imaging System for Measuring 3D Dynamic Joint Motion During Overground Gait.

    PubMed

    Guan, Shanyuanye; Gray, Hans A; Keynejad, Farzad; Pandy, Marcus G

    2016-01-01

    Most X-ray fluoroscopy systems are stationary and impose restrictions on the measurement of dynamic joint motion; for example, knee-joint kinematics during gait is usually measured with the subject ambulating on a treadmill. We developed a computer-controlled, mobile, biplane, X-ray fluoroscopy system to track human body movement for high-speed imaging of 3D joint motion during overground gait. A robotic gantry mechanism translates the two X-ray units alongside the subject, tracking and imaging the joint of interest as the subject moves. The main aim of the present study was to determine the accuracy with which the mobile imaging system measures 3D knee-joint kinematics during walking. In vitro experiments were performed to measure the relative positions of the tibia and femur in an intact human cadaver knee and of the tibial and femoral components of a total knee arthroplasty (TKA) implant during simulated overground gait. Accuracy was determined by calculating mean, standard deviation and root-mean-squared errors from differences between kinematic measurements obtained using volumetric models of the bones and TKA components and reference measurements obtained from metal beads embedded in the bones. Measurement accuracy was enhanced by the ability to track and image the joint concurrently. Maximum root-mean-squared errors were 0.33 mm and 0.65° for translations and rotations of the TKA knee and 0.78 mm and 0.77° for translations and rotations of the intact knee, which are comparable to results reported for treadmill walking using stationary biplane systems. System capability for in vivo joint motion measurement was also demonstrated for overground gait.
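The accuracy metrics quoted in this record (mean, standard deviation, and root-mean-squared error of the differences between model-based and bead-based measurements) reduce to a few lines. A small sketch with made-up per-frame errors, not the study's data:

```python
import numpy as np

def kinematic_errors(measured, reference):
    """Mean, standard deviation, and RMS of the differences between
    model-based and bead-based kinematic measurements."""
    d = np.asarray(measured) - np.asarray(reference)
    return d.mean(), d.std(ddof=0), np.sqrt((d ** 2).mean())

# Toy example: translation errors (mm) over five frames.
m = [0.1, -0.2, 0.3, 0.0, 0.2]
r = [0.0, 0.0, 0.0, 0.0, 0.0]
mean_e, sd_e, rmse = kinematic_errors(m, r)
print(round(rmse, 3))  # -> 0.19
```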

  9. NeuroTerrain – a client-server system for browsing 3D biomedical image data sets

    PubMed Central

    Gustafson, Carl; Bug, William J; Nissanov, Jonathan

    2007-01-01

    Background Three dimensional biomedical image sets are becoming ubiquitous, along with the canonical atlases providing the necessary spatial context for analysis. To make full use of these 3D image sets, one must be able to present views for 2D display, either surface renderings or 2D cross-sections through the data. Typical display software is limited to presentations along one of the three orthogonal anatomical axes (coronal, horizontal, or sagittal). However, data sets precisely oriented along the major axes are rare. To make fullest use of these datasets, one must reasonably match the atlas' orientation; this involves resampling the atlas in planes matched to the data set. Traditionally, this requires the atlas and browser reside on the user's desktop; unfortunately, in addition to being monolithic programs, these tools often require substantial local resources. In this article, we describe a network-capable, client-server framework to slice and visualize 3D atlases at off-axis angles, along with an open client architecture and development kit to support integration into complex data analysis environments. Results Here we describe the basic architecture of a client-server 3D visualization system, consisting of a thin Java client built on a development kit, and a computationally robust, high-performance server written in ANSI C++. The Java client components (NetOStat) support arbitrary-angle viewing and run on readily available desktop computers running Mac OS X, Windows XP, or Linux as a downloadable Java Application. Using the NeuroTerrain Software Development Kit (NT-SDK), sophisticated atlas browsing can be added to any Java-compatible application requiring as little as 50 lines of Java glue code, thus making it eminently re-useable and much more accessible to programmers building more complex, biomedical data analysis tools. The NT-SDK separates the interactive GUI components from the server control and monitoring, so as to support development of non

  10. High-quality 3-D coronary artery imaging on an interventional C-arm x-ray system

    SciTech Connect

    Hansis, Eberhard; Carroll, John D.; Schaefer, Dirk; Doessel, Olaf; Grass, Michael

    2010-04-15

    Purpose: Three-dimensional (3-D) reconstruction of the coronary arteries during a cardiac catheter-based intervention can be performed from a C-arm based rotational x-ray angiography sequence. It can support the diagnosis of coronary artery disease, treatment planning, and intervention guidance. 3-D reconstruction also enables quantitative vessel analysis, including vessel dynamics from a time-series of reconstructions. Methods: The strong angular undersampling and motion effects present in gated cardiac reconstruction necessitate the development of special reconstruction methods. This contribution presents a fully automatic method for creating high-quality coronary artery reconstructions. It employs a sparseness-prior based iterative reconstruction technique in combination with projection-based motion compensation. Results: The method is tested on a dynamic software phantom, assessing reconstruction accuracy with respect to vessel radii and attenuation coefficients. Reconstructions from clinical cases are presented, displaying high contrast, sharpness, and level of detail. Conclusions: The presented method enables high-quality 3-D coronary artery imaging on an interventional C-arm system.

  11. A 3-D ultrasound imaging robotic system to detect and quantify lower limb arterial stenoses: in vivo feasibility.

    PubMed

    Janvier, Marie-Ange; Merouche, Samir; Allard, Louise; Soulez, Gilles; Cloutier, Guy

    2014-01-01

    The degree of stenosis is the most common criterion used to assess the severity of lower limb peripheral arterial disease. Two-dimensional ultrasound (US) imaging is the first-line diagnostic method for investigating lesions, but it cannot render a 3-D map of the entire lower limb vascular tree required for therapy planning. We propose a prototype 3-D US imaging robotic system that can potentially reconstruct arteries from the iliac in the lower abdomen down to the popliteal behind the knee. A realistic multi-modal vascular phantom was first conceptualized to evaluate the system's performance. Geometric accuracies were assessed in surface reconstruction and cross-sectional area in comparison to computed tomography angiography (CTA). A mean surface map error of 0.55 mm was recorded for 3-D US vessel representations, and cross-sectional lumen areas were congruent with CTA geometry. In the phantom study, stenotic lesions were properly localized and severe stenoses up to 98.3% were evaluated with -3.6 to 11.8% errors. The feasibility of the in vivo system in reconstructing the normal femoral artery segment of a volunteer and detecting stenoses on a femoral segment of a patient was also investigated and compared with that of CTA. Together, these results encourage future developments to increase the robot's potential to adequately represent lower limb vessels and clinically evaluate stenotic lesions for therapy planning and recurrent non-invasive and non-ionizing follow-up examinations.

  12. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    NASA Astrophysics Data System (ADS)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

    Thermal cameras are widely used in driver vision enhancement systems. In pathless terrain, however, driving becomes challenging without stereoscopic perception. Stereoscopic imaging is a long-established technique with well-understood physical and physiological parameters. Recently, commercial hype has been observed, especially in display techniques, and the commercial market is already flooded with systems based on goggle-aided 3D viewing techniques. However, their use is limited for military applications, since goggles are not accepted by military users for several reasons. The proposed uncooled thermal imaging stereoscopic camera, with a geometrical resolution of 640×480 pixels, is perfectly matched to the autostereoscopic display with 1280×768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and the right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and the right eye of the observer. This allows stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.

  13. A semi-automated 3-D annotation method for breast ultrasound imaging: system development and feasibility study on phantoms.

    PubMed

    Jiang, Wei-wei; Li, An-hua; Zheng, Yong-Ping

    2014-02-01

    Spatial annotation is an essential step in breast ultrasound imaging, because the follow-up diagnosis and treatment are based on this annotation. However, the current method for annotation is manual and highly dependent on the operator's experience. Moreover, important spatial information, such as the probe tilt angle, cannot be indicated in the clinical 2-D annotations. To solve these problems, we developed a semi-automated 3-D annotation method for breast ultrasound imaging. A spatial sensor was fixed on an ultrasound probe to obtain the image spatial data. Three-dimensional virtual models of breast and probe were used to annotate image locations. After the reference points were recorded, this system displayed the image annotations automatically. Compared with the conventional manual annotation method, this new annotation system has higher accuracy as indicated by the phantom test results. In addition, this new annotation method has good repeatability, with intra-class correlation coefficients of 0.907 (average variation: ≤3.45%) and 0.937 (average variation: ≤2.85%) for the intra-rater and inter-rater tests, respectively. Breast phantom experiments simulating clinical breast scanning further indicated the feasibility of this system for clinical applications. This new annotation method is expected to facilitate more accurate, intuitive and rapid breast ultrasound diagnosis.

  14. Clinical outcomes following spinal fusion using an intraoperative computed tomographic 3D imaging system.

    PubMed

    Xiao, Roy; Miller, Jacob A; Sabharwal, Navin C; Lubelski, Daniel; Alentado, Vincent J; Healy, Andrew T; Mroz, Thomas E; Benzel, Edward C

    2017-03-03

    OBJECTIVE Improvements in imaging technology have steadily advanced surgical approaches. Within the field of spine surgery, assistance from the O-arm Multidimensional Surgical Imaging System has been established to yield superior accuracy of pedicle screw insertion compared with freehand and fluoroscopic approaches. Despite this evidence, no studies have investigated the clinical relevance associated with increased accuracy. Accordingly, the objective of this study was to investigate the clinical outcomes following thoracolumbar spinal fusion associated with O-arm-assisted navigation. The authors hypothesized that increased accuracy achieved with O-arm-assisted navigation decreases the rate of reoperation secondary to reduced hardware failure and screw misplacement. METHODS A consecutive retrospective review of all patients who underwent open thoracolumbar spinal fusion at a single tertiary-care institution between December 2012 and December 2014 was conducted. Outcomes assessed included operative time, length of hospital stay, and rates of readmission and reoperation. Mixed-effects Cox proportional hazards modeling, with surgeon as a random effect, was used to investigate the association between O-arm-assisted navigation and postoperative outcomes. RESULTS Among 1208 procedures, 614 were performed with O-arm-assisted navigation, 356 using freehand techniques, and 238 using fluoroscopic guidance. The most common indication for surgery was spondylolisthesis (56.2%), and most patients underwent a posterolateral fusion only (59.4%). Although O-arm procedures involved more vertebral levels compared with the combined freehand/fluoroscopy cohort (4.79 vs 4.26 vertebral levels; p < 0.01), no significant differences in operative time were observed (4.40 vs 4.30 hours; p = 0.38). Patients who underwent an O-arm procedure experienced shorter hospital stays (4.72 vs 5.43 days; p < 0.01). O-arm-assisted navigation trended toward predicting decreased risk of spine

  15. A volume of intersection approach for on-the-fly system matrix calculation in 3D PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Lougovski, A.; Hofheinz, F.; Maus, J.; Schramm, G.; Will, E.; van den Hoff, J.

    2014-02-01

    The aim of this study is the evaluation of on-the-fly volume of intersection computation for the system's geometry modelling in 3D PET image reconstruction. For this purpose we propose a simple geometrical model in which the cubic image voxels on the given Cartesian grid are approximated with spheres and the rectangular tubes of response (ToRs) are approximated with cylinders. The model was integrated into a fully 3D list-mode PET reconstruction for performance evaluation. In our model the volume of intersection between a voxel and the ToR is only a function of the impact parameter (the distance from the voxel centre to the ToR axis) and is independent of the relative orientation of voxel and ToR. This substantially reduces the computational complexity of the system matrix calculation. Based on phantom measurements it was determined that adjusting the diameters of the spherical voxel and the ToR in such a way that the actual voxel and ToR volumes are conserved leads to the best compromise between high spatial resolution, low noise, and suppression of Gibbs artefacts in the reconstructed images. Phantom as well as clinical datasets from two different PET systems (Siemens ECAT HR+ and Philips Ingenuity-TF PET/MR) were processed using the developed and the respective vendor-provided (line of intersection related) reconstruction algorithms. A comparison of the reconstructed images demonstrated very good performance of the new approach. The evaluation showed the respective vendor-provided reconstruction algorithms to have 34-41% lower resolution than the developed one while exhibiting comparable noise levels. In contrast to explicit point spread function modelling, our model has a simple, straightforward implementation and should be easy to integrate into existing reconstruction software, making it competitive with other existing resolution recovery techniques.
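
    The geometric simplification above (spherical voxels, cylindrical ToRs, overlap depending only on the impact parameter) can be verified numerically. A Monte Carlo sketch in NumPy, not the authors' analytic computation:

    ```python
    import numpy as np

    def intersection_volume(d, r_sph, r_cyl, n=200_000, seed=0):
        """Monte Carlo estimate of the overlap volume between a sphere of
        radius r_sph, whose centre lies at impact parameter d from the axis
        of an infinite cylinder of radius r_cyl aligned with z."""
        rng = np.random.default_rng(seed)
        # sample uniformly inside the sphere via rejection in its bounding cube
        pts = rng.uniform(-r_sph, r_sph, size=(n, 3))
        pts = pts[(pts ** 2).sum(axis=1) <= r_sph ** 2]
        # shift the sphere centre to (d, 0, 0); the cylinder axis is the z axis
        x = pts[:, 0] + d
        y = pts[:, 1]
        in_cyl = x ** 2 + y ** 2 <= r_cyl ** 2
        v_sphere = 4.0 / 3.0 * np.pi * r_sph ** 3
        return v_sphere * in_cyl.mean()

    # d = 0: a unit sphere fully inside a wide cylinder -> the full sphere volume
    v0 = intersection_volume(0.0, r_sph=1.0, r_cyl=2.0)
    # large impact parameter: no overlap at all
    v_far = intersection_volume(5.0, r_sph=1.0, r_cyl=2.0)
    ```

    Because the overlap is invariant to rotations about the cylinder axis and translations along it, a single 1-D lookup table over the impact parameter suffices at reconstruction time, which is the computational saving the abstract describes.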

  16. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation systems have been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimal transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both the CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration (Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information), combined with three optimization methods (Powell's method, the Downhill simplex algorithm, and a genetic algorithm), are evaluated for convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains maximum correlation and similarity between the C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine saw bones were used in the experiment to evaluate 2D-3D image registration accuracy. The average error in displacement is 0.22 mm, the success rate is approximately 90%, and the average registration takes 16 seconds. PMID:27018859
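
    As a concrete illustration of the best-performing similarity measure, here is a minimal NumPy implementation of normalized cross-correlation between two same-sized images (e.g. a C-Arm image and a DRR). In a registration loop one would maximize this score over the pose parameters, for instance with a downhill simplex (Nelder-Mead) optimizer; the images below are illustrative:

    ```python
    import numpy as np

    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        """Normalized cross-correlation between two same-sized images.
        Returns a value in [-1, 1]; 1 means a perfect (linear) match."""
        a = a.astype(float).ravel()
        b = b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    img = np.arange(12.0).reshape(3, 4)
    score = ncc(img, img)  # identical images score (numerically) 1.0
    ```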

  17. Development of a 2D Image Reconstruction and Viewing System for Histological Images from Multiple Tissue Blocks: Towards High-Resolution Whole-Organ 3D Histological Images.

    PubMed

    Hashimoto, Noriaki; Bautista, Pinky A; Haneishi, Hideaki; Snuderl, Matija; Yagi, Yukako

    2016-01-01

    High-resolution 3D histology image reconstruction of the whole brain organ starts from reconstructing the high-resolution 2D histology images of a brain slice. In this paper, we introduced a method to automatically align the histology images of thin tissue sections cut from the multiple paraffin-embedded tissue blocks of a brain slice. For this method, we employed template matching and incorporated an optimization technique to further improve the accuracy of the 2D reconstructed image. In the template matching, we used the gross image of the brain slice as a reference to the reconstructed 2D histology image of the slice, while in the optimization procedure, we utilized the Jaccard index as the metric of the reconstruction accuracy. The results of our experiment on the initial 3 different whole-brain tissue slices showed that while the method works, it is also constrained by tissue deformations introduced during the tissue processing and slicing. The size of the reconstructed high-resolution 2D histology image of a brain slice is huge, and designing an image viewer that makes particularly efficient use of the computing power of a standard computer used in our laboratories is of interest. We also present the initial implementation of our 2D image viewer system in this paper.
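
    The Jaccard index used as the reconstruction-accuracy metric is straightforward to compute on binary masks. A minimal sketch (the mask names are illustrative):

    ```python
    import numpy as np

    def jaccard_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Jaccard index |A intersect B| / |A union B| between two binary
        masks, e.g. the reconstructed 2D histology mosaic and the gross
        image of the brain slice used as the reference."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        union = np.logical_or(a, b).sum()
        if union == 0:
            return 1.0  # both masks empty: define as perfect agreement
        return float(np.logical_and(a, b).sum() / union)

    # two overlapping 8-pixel rectangles: intersection 4, union 12 -> 1/3
    a = np.zeros((4, 4)); a[:2, :] = 1
    b = np.zeros((4, 4)); b[:, :2] = 1
    score = jaccard_index(a, b)
    ```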

  18. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become popular in recent years. With light field optics, a real 3D space can be described on a 2D plane as 4D data, which we call light field data. The process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data; in this latter procedure, the 3D information is encoded onto a plane as 2D data by a lens array plate. The transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper we first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, we explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data are displayed.
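
    The "refocusing" operation described above is commonly implemented as shift-and-add over the sub-aperture views encoded in the light field. A simplified NumPy sketch, assuming the light field has already been decoded into a (U, V, H, W) array of views and using integer shifts via np.roll; this is a generic textbook formulation, not the paper's real-domain method:

    ```python
    import numpy as np

    def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
        """Synthetic refocusing by shift-and-add.

        light_field has shape (U, V, H, W): one HxW sub-aperture view per
        (u, v) position behind the lens array. Each view is shifted in
        proportion to its (u, v) offset and the focus parameter alpha,
        then all views are averaged; points at the chosen depth add up
        coherently, everything else blurs."""
        U, V, H, W = light_field.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    lf = np.ones((3, 3, 4, 4))       # toy constant light field
    refocused = refocus(lf, alpha=1.0)
    ```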

  19. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  20. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
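
    The error-containment scheme, partitions that extend through all wavelength bands and are compressed independently so that corruption in one does not affect the others, can be illustrated with a toy stand-in coder (zlib replaces ICER-3D's wavelet/entropy coding; only the partitioning idea is reproduced here):

    ```python
    import zlib
    import numpy as np

    def compress_partitions(cube: np.ndarray, n_part: int):
        """Split a (bands, rows, cols) hyperspectral cube into n_part
        column-wise partitions that span all bands, and compress each
        partition independently."""
        parts = np.array_split(cube, n_part, axis=2)
        return [(p.shape, zlib.compress(np.ascontiguousarray(p).tobytes())) for p in parts]

    def decompress_partitions(packets, dtype):
        out = []
        for shape, blob in packets:
            try:
                out.append(np.frombuffer(zlib.decompress(blob), dtype=dtype).reshape(shape))
            except zlib.error:
                out.append(np.zeros(shape, dtype=dtype))  # lost partition: zero fill
        return np.concatenate(out, axis=2)

    cube = np.arange(4 * 8 * 8, dtype=np.int16).reshape(4, 8, 8)
    packets = compress_partitions(cube, n_part=4)
    packets[1] = (packets[1][0], b"corrupted")      # damage one partition in transit
    recovered = decompress_partitions(packets, cube.dtype)
    # the three undamaged partitions decode exactly; only columns 2-3 are lost
    ```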

  1. Evaluation of the Accuracy of a 3D Surface Imaging System for Patient Setup in Head and Neck Cancer Radiotherapy

    SciTech Connect

    Gopan, Olga; Wu Qiuwen

    2012-10-01

    Purpose: To evaluate the accuracy of three-dimensional (3D) surface imaging system (AlignRT) registration algorithms for head-and-neck cancer patient setup during radiotherapy. Methods and Materials: Eleven patients, each undergoing six repeated weekly helical computed tomography (CT) scans during the treatment course (a total of 77 CTs including the planning CT), were included in the study. Patient surface images used in AlignRT registration were not captured by the 3D cameras; instead, they were derived from skin contours from these CTs, thereby eliminating issues with immobilization masks. The results from surface registrations in AlignRT based on CT skin contours were compared to those based on bony anatomy registrations in Pinnacle³, which was considered the gold standard. Both rigid and nonrigid types of setup errors were analyzed, and the effect of tumor shrinkage was investigated. Results: The maximum registration errors in AlignRT were 0.2° for rotations and 0.7 mm for translations in all directions. The rigid alignment accuracy in the head region when applied to actual patient data was 1.1°, 0.8°, and 2.2° in rotation and 4.5, 2.7, and 2.4 mm in translation along the vertical, longitudinal, and lateral axes at the 90% confidence level. The accuracy was affected by the patient's weight loss during the treatment course, which was patient specific. Selectively choosing surface regions improved registration accuracy. The discrepancy for nonrigid registration was much larger, at 1.9°, 2.4°, and 4.5° and 10.1, 11.9, and 6.9 mm at the 90% confidence level. Conclusions: The 3D surface imaging system is capable of detecting rigid setup errors with good accuracy for head-and-neck cancer. Further investigations are needed to improve the accuracy in detecting nonrigid setup errors.

  2. Advances and considerations in technologies for growing, imaging, and analyzing 3-D root system architecture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The ability of a plant to mine the soil for nutrients and water is determined by how, where, and when roots are arranged in the soil matrix. The capacity of a plant to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, is affected by root system architecture...

  3. Ultrasonic 3D imaging system for the automated application in milling machines

    NASA Astrophysics Data System (ADS)

    Schmitt, Robert; Hafner, Philip

    2007-04-01

    In order to meet the requirements of rising flexibility and automated material inspection in a small-batch production environment, an automatically changeable ultrasound sensor tool for milling machines has been developed. This system enables automated ultrasonic inspection of varying parts to be carried out directly on a machining center and visualizes hidden geometries and material imperfections three-dimensionally. The sensor tool is based on commercial ultrasonic squirter probes, which can use the cooling lubricant for acoustic coupling. During measurement recording, the milling machine's axes move the sensor under numerical control across the workpiece. By this means, automated material inspection tasks can be performed very cost-effectively without setting up separate testing facilities. The 4- or 5-axis kinematics enables the system to inspect even very complex-shaped parts.

  4. Development and Demonstration of a Networked Telepathology 3-D Imaging, Databasing, and Communication System

    DTIC Science & Technology

    1996-10-01

    synthetic aperture radar (SAR) imagery. Additional activities have involved computer simulation of a sensor fusion system and incorporating multiple

  5. 3D noninvasive, high-resolution imaging using a photoacoustic tomography (PAT) system and rapid wavelength-cycling lasers

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin; Gross, Daniel; Klosner, Marc; Chan, Gary; Wu, Chunbai; Heller, Donald F.

    2015-05-01

    Globally, cancer is a major health issue as advances in modern medicine continue to extend the human life span. Breast cancer ranks second as a cause of cancer death in women in the United States. Photoacoustic (PA) imaging (PAI) provides high molecular contrast at greater depths in tissue without the use of ionizing radiation. In this work, we describe the development of a PA tomography (PAT) system and a rapid wavelength-cycling Alexandrite laser designed for clinical PAI applications. The laser produces 450 mJ/pulse at 25 Hz to illuminate the entire breast, which eliminates the need to scan the laser source. Wavelength cycling provides a pulse sequence in which the output wavelength repeatedly alternates between 755 nm and 797 nm rapidly within milliseconds. We present imaging results of breast phantoms with inclusions of different sizes at varying depths, obtained with this laser source, a 5-MHz 128-element transducer and a 128-channel Verasonics system. Results include PA images and 3D reconstruction of the breast phantom at 755 and 797 nm, delineating the inclusions that mimic tumors in the breast.
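
    A common reason for cycling between two near-infrared wavelengths such as 755 and 797 nm is linear spectral unmixing of deoxy- and oxyhemoglobin (797 nm lies near the isosbestic point). The sketch below shows that standard two-wavelength unmixing step; it is not a method stated in this abstract, and the extinction values are illustrative placeholders, not tabulated constants:

    ```python
    import numpy as np

    # Rows: wavelengths (755 nm, 797 nm); columns: [eps_Hb, eps_HbO2].
    # Placeholder numbers chosen only so the 2x2 system is well conditioned;
    # near the isosbestic point the two coefficients are roughly equal.
    E = np.array([[1.40, 0.60],
                  [0.80, 0.80]])

    def unmix(p755: float, p797: float) -> np.ndarray:
        """Solve E @ [C_Hb, C_HbO2] = [p755, p797] for the concentrations,
        modelling each PA amplitude as a weighted sum of the two absorbers."""
        return np.linalg.solve(E, np.array([p755, p797]))

    # forward-simulate a voxel with known concentrations, then recover them
    c_true = np.array([0.3, 0.7])
    p = E @ c_true
    c_est = unmix(*p)
    ```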

  6. Imaging the Roots of Geothermal Systems: 3-D Inversion of Magnetotelluric Array Data in the Taupo Volcanic Zone, New Zealand

    NASA Astrophysics Data System (ADS)

    Bertrand, E. A.; Caldwell, G.; Bannister, S. C.; Hill, G.; Bennie, S.

    2013-12-01

    The Taupo Volcanic Zone (TVZ), located in the central North Island of New Zealand, is a rifted arc that contains more than 20 liquid-dominated high-temperature geothermal systems, which together discharge ~4.2 GW of heat at the surface. The shallow (upper ~500 m) extent of these geothermal systems is marked by low-resistivity, mapped by tens-of-thousands of DC resistivity measurements collected throughout the 1970's and 80's. Conceptual models of heat transport through the brittle crust of the TVZ link these low-resistivity anomalies to the tops of vertically ascending plumes of convecting hydrothermal fluid. Recently, data from a 40-site array of broadband seismometers with ~4 km station spacing, and an array of 270 broadband magnetotelluric (MT) measurements with ~2 km station spacing, have been collected in the south-eastern part of the TVZ in an experiment to image the deep structure (or roots) of the geothermal systems in this region. Unlike DC resistivity, these MT measurements are capable of resolving the resistivity structure of the Earth to depths of 10 km or more. 2-D and 3-D models of subsets of these MT data have been used to provide the first-ever images of quasi-vertical low-resistivity zones (at depths of 3-7 km) that connect with the near-surface geothermal fields. These low-resistivity zones are interpreted to represent convection plumes of high-temperature fluids ascending within fractures, which supply heat to the overlying geothermal fields. At the Rotokawa, Ngatamariki and Ohaaki geothermal fields, these plumes extend to a broad layer of low-resistivity, inferred to represent a magmatic, basal heat source located below the seismogenic zone (at ~7-8 km depth) that drives convection in the brittle crust above. Little is known about the mechanisms that transfer heat into the hydrothermal regime. However, at Rotokawa, new 3-D resistivity models image a vertical low-resistivity zone that lies directly beneath the geothermal field. The top of this

  7. Active segmentation of 3D axonal images.

    PubMed

    Muralidhar, Gautam S; Gopinath, Ajay; Bovik, Alan C; Ben-Yakar, Adela

    2012-01-01

    We present an active contour framework for segmenting neuronal axons in 3D confocal microscopy data. Our work is motivated by the need to conduct high-throughput experiments involving microfluidic devices and femtosecond lasers to study the genetic mechanisms behind nerve regeneration and repair. While most applications of active contours have focused on segmenting closed regions in 2D medical and natural images, few have addressed segmenting open-ended curvilinear structures in 2D or higher dimensions. The active contour framework we present here ties together a well-known 2D active contour model [5] with the physics of projection imaging geometry to yield a segmented axon in 3D. Qualitative results illustrate the promise of our approach for segmenting neuronal axons in 3D confocal microscopy data.

  8. 3-D imaging of the CNS.

    PubMed

    Runge, V M; Gelblum, D Y; Wood, M L

    1990-01-01

    3-D gradient echo techniques, and in particular FLASH, represent a significant advance in MR imaging strategy, allowing thin-section, high-resolution imaging through a large region of interest. Anatomical areas of application include the brain, spine, and extremities, although the majority of work to date has been performed in the brain. Superior T1 contrast, and thus sensitivity to the presence of Gd-DTPA, is achieved with 3-D FLASH when compared with the 2-D spin echo technique. There is marked arterial and venous enhancement following Gd-DTPA administration on 3-D FLASH, a less common finding with 2-D spin echo. Enhancement of the falx and tentorium is also more prominent. From a single data acquisition requiring less than 11 min of scan time, high-resolution reformatted sagittal, coronal, and axial images can be obtained, in addition to sections in any arbitrary plane. Tissue segmentation techniques can be applied and lesions displayed in three dimensions. These results may lead to the replacement of 2-D spin echo with 3-D FLASH for high-resolution T1-weighted MR imaging of the CNS, particularly in the study of mass lesions and structural anomalies. The application of similar T2-weighted gradient echo techniques may follow; however, the achievable signal-to-noise ratio remains a potential limitation.

  9. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risk. Consequently, it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. The accuracy of quantitative analysis is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a post-reconstruction technique that uses off-line water and bone linearization curves calculated experimentally, in order to take into account the nonhomogeneity of the scanned animal. To evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a marked improvement in the calculated mouse femur mass was observed. Results were also compared with those obtained using the simple water linearization technique, which does not take the nonhomogeneity of the object into account.
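
    A water linearization curve of the kind used here maps measured, beam-hardened projection values back onto the linear (monochromatic) response, typically via a polynomial fitted from a phantom of known thicknesses. A toy NumPy sketch with a synthetic sub-linear response; the attenuation coefficient and the 0.02 hardening term are illustrative, not the paper's calibration:

    ```python
    import numpy as np

    # Calibration phantom: known water thicknesses and the ideal (linear)
    # log-attenuation they should produce for a monochromatic beam.
    thickness = np.linspace(0.0, 5.0, 20)          # cm of water
    mu = 0.2                                       # assumed linear coefficient, 1/cm
    p_ideal = mu * thickness
    # Toy beam-hardened measurement: response flattens with thickness.
    p_meas = p_ideal - 0.02 * p_ideal ** 2

    # Fit the linearization curve mapping measured -> ideal values.
    coeffs = np.polyfit(p_meas, p_ideal, deg=3)

    def linearize(projection: np.ndarray) -> np.ndarray:
        """Map beam-hardened projection values back onto the linear response."""
        return np.polyval(coeffs, projection)

    # Corrected values fall back onto the straight line mu * thickness.
    corrected = linearize(p_meas)
    ```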

  10. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  11. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  12. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method were proposed, and peculiarities of the formation of shadows and images of typical elements of extended objects were studied. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic system COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing are presented and discussed. The devices are in pilot operation at atomic and railway companies.

  13. Walker Ranch 3D seismic images

    SciTech Connect

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  14. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
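
    The two enlargement strategies, more pixels per elemental image versus more elemental images, can be sketched with nearest-neighbour duplication (a real system would interpolate, especially when synthesizing new elemental images; the 2x factor and grid layout here are illustrative, not the paper's implementation):

    ```python
    import numpy as np

    def upsample_pixels(ip: np.ndarray) -> np.ndarray:
        """Method 1: double the pixel count of every elemental image
        (nearest-neighbour; the elemental-image count stays fixed)."""
        return ip.repeat(2, axis=0).repeat(2, axis=1)

    def upsample_elemental_images(ip: np.ndarray, elem: int) -> np.ndarray:
        """Method 2: double the number of elemental images while keeping
        their pixel size, here by duplicating neighbouring elemental
        images (elem is the side length of one elemental image)."""
        h, w = ip.shape
        grid = ip.reshape(h // elem, elem, w // elem, elem)   # (ny, ey, nx, ex)
        grid = grid.repeat(2, axis=0).repeat(2, axis=2)       # 2x more elemental images
        return grid.reshape(2 * h, 2 * w)

    ip = np.arange(16.0).reshape(4, 4)      # toy 2x2 grid of 2x2 elemental images
    a = upsample_pixels(ip)                 # 8x8: each elemental image is now 4x4
    b = upsample_elemental_images(ip, 2)    # 8x8: a 4x4 grid of 2x2 elemental images
    ```

    Both paths yield the same output resolution but change the IP trade-off differently: method 1 refines each view, while method 2 densifies the viewpoint sampling behind the lens array.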

  15. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization, and angle sampling over 2π steradians (the upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process, and the resulting image.

  16. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D wholebody scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
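
    The three-way partition a tilted plane induces on a scanned point cloud follows directly from signed distances to the plane. A minimal sketch, assuming the plane is given by a point and a normal (3DM's actual interface and equations are not reproduced here):

    ```python
    import numpy as np

    def partition_by_plane(points, p0, normal, tol=1e-9):
        """Split a point cloud into the three disjoint regions defined by a
        plane through p0 with normal `normal`: points on the plane (within
        tol), points on the normal side, and points on the opposite side."""
        pts = np.asarray(points, dtype=float)
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)
        d = (pts - np.asarray(p0, dtype=float)) @ n   # signed distances
        return pts[np.abs(d) <= tol], pts[d > tol], pts[d < -tol]

    pts = np.array([[0.0, 0.0, 1.0],     # above the z = 0 plane
                    [0.0, 0.0, -1.0],    # below it
                    [0.0, 0.0, 0.0]])    # on it
    on, above, below = partition_by_plane(pts, p0=np.zeros(3), normal=[0, 0, 1])
    ```

    Tilting the plane is just a change of `normal`; the same signed-distance test also identifies the extraneous scanner points to discard.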

  17. Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area

    NASA Astrophysics Data System (ADS)

    Gini, R.; Passoni, D.; Pinto, L.; Sona, G.

    2012-07-01

    The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment), which starts from the need to enhance natural, artistic, and cultural heritage, to make it more accessible through mobile audiovisual systems for 3D reconstruction, and to improve monitoring procedures by using new media to integrate the fruition phase with preservation. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields, and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Tests were then performed to analyze the quality of the UAV images for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed by GPS to allow accuracy analysis. Aerial triangulations (ATs) were carried out with commercial photogrammetric software, Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to pick out the pros and cons of each package in managing non-conventional aerial imagery, as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the onboard UAV navigation system.

  18. Feasibility of 3D harmonic contrast imaging.

    PubMed

    Voormolen, M M; Bouakaz, A; Krenning, B J; Lancée, C T; ten Cate, F J; de Jong, N

    2004-04-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast-rotating phased-array transducer for 3D imaging of the heart, with harmonic capabilities that make it suitable for contrast imaging. In this study, the feasibility of 3D harmonic contrast imaging is evaluated in vitro. A commercially available tissue-mimicking flow phantom was used in combination with SonoVue. Backscatter power spectra from a tissue and a contrast region of interest were calculated from recorded radio-frequency data. The spectra, and the contrast-to-tissue ratio extracted from them, were used to optimize the excitation frequency, the pulse length, and the receive filter settings of the transducer. Frequencies ranging from 1.66 to 2.35 MHz and pulse lengths of 1.5, 2, and 2.5 cycles were explored. An increase of more than 15 dB in the contrast-to-tissue ratio was found around the second harmonic compared with the fundamental level, at an optimal excitation frequency of 1.74 MHz and a pulse length of 2.5 cycles. Using the optimal settings for 3D harmonic contrast recordings, volume measurements of a left-ventricle-shaped agar phantom were performed. Without contrast, the extracted volume data resulted in a volume error of 1.5%; with contrast, an error of 3.8% was achieved. The results show the feasibility of accurate volume measurements from 3D harmonic contrast images. Further investigations will include the clinical evaluation of the presented technique for improved assessment of the heart.
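
    The contrast-to-tissue ratio used above can be illustrated as a band-limited difference of backscatter power spectra from the two regions of interest. A hedged sketch (function names and the synthetic data are illustrative; the authors' actual RF processing is not described at this level of detail):

```python
import numpy as np

def power_spectrum_db(rf, fs):
    """Mean power spectrum (dB) of a set of RF lines sampled at fs."""
    spec = np.abs(np.fft.rfft(rf, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(rf.shape[-1], d=1.0 / fs)
    return freqs, 10 * np.log10(spec.mean(axis=0) + 1e-30)

def contrast_to_tissue_ratio(rf_contrast, rf_tissue, fs, f_lo, f_hi):
    """CTR in dB within a band, e.g. around the second harmonic."""
    freqs, p_c = power_spectrum_db(rf_contrast, fs)
    _, p_t = power_spectrum_db(rf_tissue, fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return (p_c[band] - p_t[band]).mean()

# synthetic check: contrast echoes at 10x the tissue amplitude
# should give a CTR of about 20 dB around the second harmonic
fs = 20e6
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 3.48e6 * t)     # 2nd harmonic of 1.74 MHz
rf_tissue = np.vstack([tone, tone])
rf_contrast = 10 * rf_tissue
ctr = contrast_to_tissue_ratio(rf_contrast, rf_tissue, fs, 3.3e6, 3.7e6)
```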

  19. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved sub-micron resolution in all three directions, with high sensitivity granted by the low coherence of the white-light source. Demonstrations of the technique on single-cell imaging have been presented previously; however, imaging of any larger sample, including clusters of cells, had not been demonstrated with the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure that can be characterized as neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuronal networks is confocal fluorescence microscopy, which requires fluorescence tagging, either with transient membrane dyes or after fixation of the cells. Studies on neurons are therefore often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuronal networks with high spatial and temporal resolution, because it is a label-free, non-invasive 3D imaging method. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  20. An interactive multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user interaction technologies enables more effective visualization of 3D information: a realistic representation of 3D objects that simplifies our understanding of their complexity and of the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with real-time user interaction capability. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype was built and tested based upon multiple projectors and a horizontally anisotropic optical display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  1. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the newly added vertex positions of the mesh. Up to nine bits of secret data can be embedded per triangle without causing any perceptible change in the visual quality or the geometric properties of the cover model. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation, and scaling. The performance of the method is compared with other existing 3D steganography algorithms.

  2. An ultrasound tomography system with polyvinyl alcohol (PVA) moldings for coupling: in vivo results for 3-D pulse-echo imaging of the female breast.

    PubMed

    Koch, Andreas; Stiller, Florian; Lerch, Reinhard; Ermert, Helmut

    2015-02-01

    Full-angle spatial compounding (FASC) is a concept for pulse-echo imaging using an ultrasound tomography (UST) system. With FASC, resolution is increased and speckle is suppressed by averaging pulse-echo data from 360°. In vivo investigations have already shown great potential for 2-D FASC in the female breast as well as in finger-joint imaging. However, providing a small number of parallel cross-sectional images with enhanced quality is not sufficient for diagnosis; volume (3-D) data are needed. For this purpose, we further developed our UST add-on system to automatically rotate a motorized array (3-D probe) around the object of investigation. Full integration of the external motor and ultrasound electronics control in a custom-made program allows acquisition of 3-D pulse-echo RF datasets within 10 min. In the case of breast cancer imaging, this concept also enables imaging of near-thorax tissue regions, which cannot be achieved with 2-D FASC. Furthermore, moldings made of polyvinyl alcohol hydrogel (PVA-H) have been developed as a new acoustic coupling concept, with great potential to replace the water-bath technique in UST, which is problematic for clinical investigations. In this contribution, we present in vivo results for 3-D FASC applied to imaging a female breast placed in a PVA-H molding during data acquisition. An algorithm is described that compensates time of flight and accounts for refraction at the water/molding and molding/tissue interfaces; for this, the mean speed of sound (SOS) of the breast tissue is estimated with an image-based method. Our results show that the PVA-H molding concept is applicable and feasible and delivers good results. 3-D FASC is superior to 2-D FASC and provides 3-D volume data at increased image quality.

  3. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, the English physicist Charles Wheatstone discovered stereopsis, the basis of 3D perception. His construction of the first stereoscope laid the foundation for stereoscopic 3D imaging, and many optical instruments since have been influenced by these basic ideas. In recent decades, the advent of digital technologies has revolutionized 3D imaging. Powerful, readily available sensors and displays, combined with efficient pre- or post-processing, enable new methods for 3D imaging and new applications. This paper draws an arc from the basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  4. Progresses in 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Navarro, Héctor; Pons, Amparo; Javidi, Bahram

    2008-11-01

    Integral imaging is a promising technique for the acquisition and autostereoscopic display of 3D scenes with full parallax and without the need for any additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray-optics point of view, of the optical effects of integral imaging systems and their interaction with the observer.

  5. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available techniques based on elastic wave propagation are limited to predicting E values, and other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and repair cost. Likewise, if internal defects such as knots and decayed areas could be identified, the saw blade could be oriented to exclude the defective portion and optimize the volume of high-value lumber obtained from each log. In this research, GPR has been successfully used to locate internal defects (knots, decays, and embedded metals) within logs. This paper discusses GPR imaging and mapping of internal defects using both 2D and 3D interpretation methodologies. Metal pieces were inserted in a log, and the reflection patterns from these metals were interpreted from radargrams acquired with a 900 MHz antenna. GPR was also able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate a 3D cylindrical volume. The actual locations of the defects showed good correlation with the interpreted defects in the 3D volume, and the time/depth slices from the 3D cylindrical volume data were useful in understanding the extent of defects inside the log.
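
    Placing a reflector in depth from a GPR radargram uses the standard relation between two-way travel time and the wave speed in the medium, v = c/√εr. A small sketch (the permittivity value below is an assumed, illustrative number, not one reported by the authors):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_ns, rel_permittivity):
    """Depth of a reflector from its two-way travel time (in ns).

    Wave speed in the medium is v = c / sqrt(eps_r); the pulse
    travels down and back, hence the division by 2.
    """
    v = C / rel_permittivity ** 0.5
    return v * (two_way_time_ns * 1e-9) / 2.0

# e.g. a reflection at 4 ns in wood with an assumed eps_r of 4
# would place the defect at roughly 0.3 m depth
depth = gpr_depth(4.0, 4.0)
```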

  6. High-resolution 3-D P-wave tomographic imaging of the shallow magmatic system of Erebus volcano, Antarctica

    NASA Astrophysics Data System (ADS)

    Zandomeneghi, D.; Aster, R. C.; Barclay, A. H.; Chaput, J. A.; Kyle, P. R.

    2011-12-01

    Erebus volcano (Ross Island), the most active volcano in Antarctica, is characterized by a persistent phonolitic lava lake at its summit and a wide range of seismic signals associated with its underlying long-lived magmatic system. The magmatic structure in a 3 by 3 km area around the summit has been imaged using high-quality data from a seismic tomographic experiment carried out during the 2008-2009 austral field season (Zandomeneghi et al., 2010). An array of 78 short period, 14 broadband, and 4 permanent Mount Erebus Volcano Observatory seismic stations and a program of 12 shots were used to model the velocity structure in the uppermost kilometer over the volcano conduit. P-wave travel times were inverted for the 3-D velocity structure using the shortest-time ray tracing (50-m grid spacing) and LSQR inversion (100-m node spacing) of a tomography code (Toomey et al., 1994) that allows for the inclusion of topography. Regularization is controlled by damping and smoothing weights and smoothing lengths, and addresses complications that are inherent in a strongly heterogeneous medium featuring rough topography and a dense parameterization and distribution of receivers/sources. The tomography reveals a composite distribution of very high and low P-wave velocity anomalies (i.e., exceeding 20% in some regions), indicating a complex sub-lava-lake magmatic geometry immediately beneath the summit region and in surrounding areas, as well as the presence of significant high velocity shallow regions. The strongest and broadest low velocity zone is located W-NW of the crater rim, indicating the presence of an off-axis shallow magma body. This feature spatially corresponds to the inferred centroid source of VLP signals associated with Strombolian eruptions and lava lake refill (Aster et al., 2008). Other resolved structures correlate with the Side Crater and with lineaments of ice cave thermal anomalies extending NE and SW of the rim. High velocities in the summit area possibly

  7. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    NASA Astrophysics Data System (ADS)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system based on a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiencies of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed in the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed that takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen were performed. The results reveal that ignoring the bi-prism deficiency induces attitude errors in the retrieved morphology, and that BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems further improves the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  8. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a relatively new biometric feature that has several merits over the more common face, fingerprint, and iris biometrics. It can easily be captured from a distance without a fully cooperative subject, and the ear has a relatively stable structure that does not change much with age or facial expression. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle; experimental results show that the design is efficient and can be used for ear recognition.
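
    In its simplest stereo form, the triangulation imaging principle mentioned above reduces to the relation z = f·b/d between focal length, baseline, and disparity. A hedged sketch of that relation (not the authors' implementation):

```python
def triangulation_depth(focal_px, baseline_m, disparity_px):
    """Depth from the classic triangulation relation z = f * b / d.

    focal_px     focal length in pixels
    baseline_m   separation between the two viewpoints, in metres
    disparity_px pixel offset of the same feature between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# focal length 1200 px, baseline 10 cm, disparity 30 px -> 4 m
z = triangulation_depth(1200.0, 0.10, 30.0)
```

The same geometry underlies structured-light variants, where the projector takes the place of one camera.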

  9. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a relatively new biometric feature that has several merits over the more common face, fingerprint, and iris biometrics. It can easily be captured from a distance without a fully cooperative subject, and the ear has a relatively stable structure that does not change much with age or facial expression. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle; experimental results show that the design is efficient and can be used for ear recognition. PMID:26061553

  10. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model

    PubMed Central

    Zhou, Jian; Qi, Jinyi

    2014-01-01

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D TOF PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is comprised primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon's ray-tracer, we propose another, more simplified geometrical projector based on Bresenham's ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties, and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model such as the analytical model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery, and we verify our observations using the provided theoretical analysis. The result offers a general guide to achieving optimal reconstruction performance based on a sparse factorization model with an image domain resolution model. PMID:24434568

  11. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model.

    PubMed

    Zhou, Jian; Qi, Jinyi

    2014-02-07

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D time-of-flight PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is comprised primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon's ray-tracer, we propose another, more simplified geometrical projector based on Bresenham's ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties, and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery, and we verify our observations using the provided theoretical analysis. The result offers a general guide to achieving optimal reconstruction performance based on a sparse factorization model with an image domain resolution model.
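
    A Bresenham-style geometrical projector of the kind mentioned above visits a fixed set of voxels per line of response, trading Siddon's exact intersection lengths for cheap integer arithmetic. An illustrative sketch (not the authors' code) that lists the voxels along one line:

```python
def bresenham3d(p0, p1):
    """Voxels visited by a 3-D Bresenham line between integer grid
    points p0 and p1. A geometric projector built this way assigns a
    unit weight per visited voxel, unlike Siddon's method, which
    weights each voxel by the exact ray-voxel intersection length."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    voxels = [(x, y, z)]
    if dx >= dy and dx >= dz:            # x is the driving axis
        e1, e2 = 2 * dy - dx, 2 * dz - dx
        while x != x1:
            if e1 >= 0:
                y += sy; e1 -= 2 * dx
            if e2 >= 0:
                z += sz; e2 -= 2 * dx
            e1 += 2 * dy; e2 += 2 * dz
            x += sx
            voxels.append((x, y, z))
    elif dy >= dx and dy >= dz:          # y is the driving axis
        e1, e2 = 2 * dx - dy, 2 * dz - dy
        while y != y1:
            if e1 >= 0:
                x += sx; e1 -= 2 * dy
            if e2 >= 0:
                z += sz; e2 -= 2 * dy
            e1 += 2 * dx; e2 += 2 * dz
            y += sy
            voxels.append((x, y, z))
    else:                                # z is the driving axis
        e1, e2 = 2 * dy - dz, 2 * dx - dz
        while z != z1:
            if e1 >= 0:
                y += sy; e1 -= 2 * dz
            if e2 >= 0:
                x += sx; e2 -= 2 * dz
            e1 += 2 * dy; e2 += 2 * dx
            z += sz
            voxels.append((x, y, z))
    return voxels
```

In a factorized model, the residual blur this approximation introduces is absorbed by the image-domain resolution (blurring) matrix.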

  12. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has a unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition. Due to the lack of low-cost, robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems of 2D while also addressing the limitations of other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second, and full-frame 3D data is produced within a few seconds. The method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  13. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  14. 3-D Extensions for Trustworthy Systems

    DTIC Science & Technology

    2011-01-01

    3-D Extensions for Trustworthy Systems (Invited Paper). Ted Huffmire, Timothy Levin, Cynthia Irvine, Ryan Kastner, and Timothy Sherwood. ... To address these problems, we propose an approach to trustworthy system development based on 3-D integration, an emerging chip fabrication technique in which two or more integrated circuit dies are fabricated individually and then combined into a single stack using vertical conductive posts. With 3-D ...

  15. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with the mobile device and can then be calibrated directly on that device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM to estimate the pose of all photos by Structure-from-Motion and then uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  16. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects and scenes. The vivid representation of captured 3D objects on a glasses-free 3D display screen can give viewers a realistic viewing experience, as if they were looking at the real-world scene. Although the technologies for 3D acquisition and 3D display have advanced rapidly in recent years, little effort has gone into studying the seamless integration of these two different aspects of 3D technology. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. The paper covers both the architecture design and the hardware and software implementation of this integrated 3D system. A prototype was built to demonstrate its real-time 3D acquisition and 3D display capabilities.

  17. Developing Customized Dental Miniscrew Surgical Template from Thermoplastic Polymer Material Using Image Superimposition, CAD System, and 3D Printing

    PubMed Central

    Yu, Jian-Hong; Lo, Lun-Jou; Hsu, Pin-Hsin

    2017-01-01

    This study integrates cone-beam computed tomography (CBCT)/laser-scan image superimposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates from a polymer material. Maxillary bone solid models, with the bone and teeth reconstructed from CBCT images and the outer profiles of the teeth and mucosa acquired by laser scanning, were superimposed to allow visual planning of miniscrew insertion and to permit surgical template fabrication. The customized surgical template CAD model was generated by offsetting the teeth/mucosa/bracket contour profiles in the superimposition model and exported to produce the plastic template using the 3DP technique and polymer material. A clinical test of anterior retraction and intrusion of the maxillary canines/incisors showed that two miniscrews were placed safely and produced no inflammation or other discomfort symptoms one week after surgery. Regarding the fit between the mucosa and the template, the average gap sizes were smaller than 0.5 mm, confirming that the surgical template provided good holding power and well-fitting adaptation. This study showed that integrating CBCT/laser-scan image superimposition with CAD and 3DP techniques can be applied to fabricate an accurate customized surgical template for dental orthodontic miniscrews. PMID:28280726

  18. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  19. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2016-07-12

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  20. 3D EFT imaging with planar electrode array: Numerical simulation

    NASA Astrophysics Data System (ADS)

    Tuykin, T.; Korjenevsky, A.

    2010-04-01

    Electric field tomography (EFT) is a new modality of quasistatic electromagnetic sounding of conductive media, recently investigated theoretically and realized experimentally. The results demonstrated so far pertain to 2D imaging with circular or linear electrode arrays (the linear array yielding rather poor image quality). In many applications 3D imaging is essential, or can significantly increase the value of the investigation. In this report we present the first results of a numerical simulation of an EFT imaging system with a planar electrode array, which allows 3D visualization of the subsurface conductivity distribution. The geometry of the system is similar to that of our EIT breast imaging system, which provides 3D conductivity imaging as a set of cross-sections at different depths from the surface. The EFT principle of operation and reconstruction approach differ significantly from those of the EIT system, so the simulation results are important for estimating whether the new contactless method can achieve comparable image quality. The EFT forward problem is solved using the finite-difference time-domain (FDTD) method for an 8×8 array of square electrodes. The calculated measurements are then used to reconstruct conductivity distributions by filtered backprojection along electric field lines. Reconstructed images of simple test objects are presented.

  1. Anthropometric precision and accuracy of digital three-dimensional photogrammetry: comparing the Genex and 3dMD imaging systems with one another and with direct anthropometry.

    PubMed

    Weinberg, Seth M; Naidoo, Sybill; Govier, Daniel P; Martin, Rick A; Kane, Alex A; Marazita, Mary L

    2006-05-01

    A variety of commercially available three-dimensional (3D) surface imaging systems are currently in use by craniofacial specialists. Little is known, however, about how measurement data generated from alternative 3D systems compare, specifically in terms of accuracy and precision. The purpose of this study was to compare anthropometric measurements obtained by way of two different digital 3D photogrammetry systems (Genex and 3dMD) as well as direct anthropometry and to evaluate intraobserver precision across these three methods. On a sample of 18 mannequin heads, 12 linear distances were measured twice by each method. A two-factor repeated measures analysis of variance was used to test simultaneously for mean differences in precision across methods. Additional descriptive statistics (e.g., technical error of measurement [TEM]) were used to quantify measurement error magnitude. Statistically significant (P < 0.05) mean differences were observed across methods for nine anthropometric variables; however, the magnitude of these differences was consistently at the submillimeter level. No significant differences were noted for precision. Moreover, the magnitude of imprecision was determined to be very small, with TEM scores well under 1 mm, and intraclass correlation coefficients ranging from 0.98 to 1. Results indicate that overall mean differences across these three methods were small enough to be of little practical importance. In terms of intraobserver precision, all methods fared equally well. This study is the first attempt to simultaneously compare 3D surface imaging systems directly with one another and with traditional anthropometry. Results suggest that craniofacial surface data obtained by way of alternative 3D photogrammetric systems can be combined or compared statistically.
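
    The technical error of measurement (TEM) reported above is conventionally computed from paired repeat measurements as TEM = √(Σdᵢ²/2n). A minimal sketch with made-up example values (the data below are illustrative, not from the study):

```python
import math

def technical_error_of_measurement(trial1, trial2):
    """Intra-observer TEM for one variable measured twice on n subjects.

    TEM = sqrt( sum((x1_i - x2_i)^2) / (2 * n) ), expressed in the
    same units as the measurements (here, millimetres).
    """
    if len(trial1) != len(trial2):
        raise ValueError("paired trials must have equal length")
    n = len(trial1)
    ss = sum((a - b) ** 2 for a, b in zip(trial1, trial2))
    return math.sqrt(ss / (2 * n))

# two repeated runs of one linear distance on five heads (mm);
# sub-millimetre TEM indicates good intra-observer precision
t1 = [101.2, 98.7, 110.4, 95.0, 102.3]
t2 = [101.0, 98.9, 110.1, 95.2, 102.5]
tem = technical_error_of_measurement(t1, t2)
```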

  2. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Such systems are now used in a wide variety of applications, such as the cinema industry and automotive active-safety systems. Depending on the application, systems offer different features, for example color sensitivity, bi-dimensional image resolution, distance measurement accuracy, and acquisition frame rate. The system we developed acquires 3D movies using indirect time-of-flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances between 10 cm and 7.5 m.
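
    The phase-to-distance relation behind iTOF ranging can be sketched in a few lines. The 20 MHz modulation frequency below is an assumption chosen so that the unambiguous range works out to the 7.5 m quoted above; it is not a parameter reported by the authors.

```python
# Hedged sketch of indirect time-of-flight (iTOF) ranging: distance is
# recovered from the phase delay of sinusoidally modulated light.
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad: float, f_mod: float = 20e6) -> float:
    """Convert measured phase delay (radians) to distance (meters)."""
    return C * phase_rad / (4 * math.pi * f_mod)

def unambiguous_range(f_mod: float = 20e6) -> float:
    """Maximum distance before the phase wraps around 2*pi: c / (2 * f_mod)."""
    return C / (2 * f_mod)
```

At 20 MHz the unambiguous range is about 7.5 m, matching the maximum distance quoted in the abstract.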

  3. Interactive visualization of multiresolution image stacks in 3D.

    PubMed

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
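
    The tier-selection rule described above (tiles appearing approximately the same size on screen regardless of the tier they come from) can be illustrated with a simple mip-level calculation. The pinhole-projection model and all parameter names are illustrative assumptions, not the StackVis implementation.

```python
# Illustrative sketch of quad-tree tier selection: pick the pyramid level
# whose texels, once projected to the screen, are roughly one pixel wide.
import math

def select_tier(distance: float, focal_px: float, texel_size: float,
                num_tiers: int) -> int:
    """Return the coarsest tier whose projected texel still covers >= 1 pixel.

    distance   -- viewer-to-slice distance in world units
    focal_px   -- focal length expressed in pixels
    texel_size -- world-space size of a full-resolution texel
    num_tiers  -- number of tiers available (0 = full resolution)
    """
    projected = focal_px * texel_size / distance  # texel width on screen, px
    if projected >= 1.0:
        return 0  # full resolution is already coarse enough
    tier = int(math.log2(1.0 / projected))       # each tier halves resolution
    return min(tier, num_tiers - 1)
```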

  4. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system, which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections that intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.

  5. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over a WiFi wireless link and then use GPU hardware and CUDA programming to implement a real-time three-dimensional stereo image by synthesizing the depth of the ROI (region of interest). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to validate the ROI emphasis effect.
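
    For a rectified stereo pair such as the one captured here, depth follows from the standard relation Z = f·B/d. The sketch below is generic; the focal length and baseline values in any usage are illustrative, not those of the paper's CMOS modules.

```python
# Minimal sketch of the standard disparity-to-depth relation used when
# synthesizing depth for a region of interest from a calibrated stereo pair.

def disparity_to_depth(disparity_px: float, focal_px: float,
                       baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 1000 px focal length and 6 cm baseline, a 10 px disparity corresponds to a depth of 6 m.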

  6. Developments in molecular SIMS depth profiling and 3D imaging of biological systems using polyatomic primary ions.

    PubMed

    Fletcher, John S; Lockyer, Nicholas P; Vickerman, John C

    2011-01-01

    In principle, mass spectral imaging has enormous potential for discovery applications in biology. The chemical specificity of mass spectrometry, combined with the spatial analysis capabilities of liquid metal cluster beams and the high yields of polyatomic ion beams, should offer an unprecedented ability to spatially locate molecular chemistry in the 100 nm range. However, although metal cluster ion beams have greatly increased yields in the m/z range up to 1000, they still have to be operated under the static limit, and even in the most favorable cases maximum yields for molecular species from 1 µm pixels are frequently below 20 counts. Some very impressive molecular imaging analysis has nonetheless been accomplished under these conditions. Although molecular ions of lipids have been detected and correlated with biology, signal levels are such that lateral resolution must be sacrificed to provide sufficient signal to image. To obtain useful spatial resolution, detection below 1 µm is almost impossible: too few ions are generated! The review shows that the application of polyatomic primary ions, with their low damage cross-sections, offers hope of a new approach to molecular SIMS imaging by accessing voxels rather than pixels, thereby increasing the dynamic signal range in 2D imaging and extending the analysis to depth profiling and 3D imaging. Recent data on cell and tissue analysis suggest that, in consequence, a wider chemistry might be accessible within a sub-micron area and as a function of depth. However, these advances are compromised by the pulsed nature of current ToF-SIMS instruments: the duty cycle is very low and results in excessive analysis times, and maximum mass resolution is incompatible with maximum spatial resolution. New instrumental directions are described that enable a dc primary beam to be used and that promise to take full advantage of all the capabilities of the polyatomic ion beam. Some new

  7. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic information of adequate quality, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.

  8. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    analysis. (c.) Real-time detection and analysis of human gait: using a video camera, we capture the walking human silhouette for pattern modeling and gait analysis. Fig. 5 shows the scanning result that is fed into a Geomagic software tool for 3D meshing. Fig. 5: 3D scanning result.

  9. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These recently developed signal processing techniques, applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated over the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.

  10. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems mostly acquire images that are multichannel (dual- or multi-polarization, multi- and hyperspectral), where noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency if there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
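
    A minimal sketch of one vectorial (3D) DCT filtering step, assuming hard thresholding of block coefficients; the block size and the 2.7σ threshold rule are common defaults in DCT-based denoising, not necessarily the settings studied in the paper.

```python
# Sketch of 3D DCT-domain denoising of a multichannel image block:
# transform, hard-threshold small coefficients, invert.
import numpy as np
from scipy.fft import dctn, idctn

def dct3d_denoise_block(block: np.ndarray, sigma: float) -> np.ndarray:
    """Hard-threshold 3D DCT denoising of one (rows, cols, channels) block."""
    coeffs = dctn(block, norm="ortho")
    thresh = 2.7 * sigma                 # common rule of thumb for hard thresholding
    dc = coeffs.flat[0]                  # preserve the mean (DC coefficient)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    coeffs.flat[0] = dc
    return idctn(coeffs, norm="ortho")
```

Applying the transform jointly across channels is what lets inter-channel correlation concentrate signal energy into few coefficients, the effect the abstract contrasts with component-wise processing.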

  11. Single-plane versus three-plane methods for relative range error evaluation of medium-range 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Cournoyer, Luc; Beraldin, J.-Angelo

    2015-05-01

    Within the context of the ASTM E57 working group WK12373, we compare the two methods that had been initially proposed for calculating the relative range error of medium-range (2 m to 150 m) optical non-contact 3D imaging systems: the first is based on a single plane (single-plane assembly) and the second on an assembly of three mutually non-orthogonal planes (three-plane assembly). Both methods are evaluated for their utility in generating a metric to quantify the relative range error of medium-range optical non-contact 3D imaging systems. We conclude that the three-plane assembly is comparable to the single-plane assembly with regard to quantification of relative range error while eliminating the requirement to isolate the edges of the target plate face.
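
    The single-plane metric can be approximated by fitting a plane to the points measured on the flat target and reporting the RMS point-to-plane residual. This generic least-squares sketch is an illustration only, not the ASTM E57 WK12373 procedure itself.

```python
# Sketch of a relative range error metric for a flat target: fit a plane by
# SVD and report the RMS of the point-to-plane residuals.
import numpy as np

def plane_rms_error(points: np.ndarray) -> float:
    """points: (N, 3) array of XYZ samples on a nominally flat target."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest
    # singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    return float(np.sqrt(np.mean(residuals ** 2)))
```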

  12. A 3D digital medical photography system in paediatric medicine.

    PubMed

    Williams, Susanne K; Ellis, Lloyd A; Williams, Gigi

    2008-01-01

    In 2004, traditional clinical photography services at the Educational Resource Centre were extended using new technology. This paper describes the establishment of a 3D digital imaging system in a paediatric setting at the Royal Children's Hospital, Melbourne.

  13. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs a chirp radar method with a Glow Discharge Detector (GDD) focal plane array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF (intermediate) frequency yields the range information at each pixel, enabling 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
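
    For a linear chirp, the IF (beat) frequency at each pixel maps to range as R = c·f_IF·T/(2B). The chirp duration and bandwidth used in the example are assumed parameters for illustration, not the system's actual values.

```python
# Sketch of how an FMCW/chirp radar maps the intermediate (IF) frequency at
# each detector pixel to range.

C = 299_792_458.0  # speed of light, m/s

def if_to_range(f_if: float, chirp_duration: float, bandwidth: float) -> float:
    """Range (m) from the beat/IF frequency of a linear chirp."""
    return C * f_if * chirp_duration / (2.0 * bandwidth)
```

With an assumed 1 ms chirp over 15 GHz of bandwidth, a 1 MHz beat frequency corresponds to a target at roughly 10 m, the working distance demonstrated above.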

  14. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
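
    The resolution estimate described above (FWHM of a profile across the carbon-fiber image) can be computed with a simple interpolated width measurement; this is a generic sketch, not the authors' analysis code.

```python
# Sketch of a full-width-at-half-maximum (FWHM) measurement on a 1D
# intensity profile, with linear interpolation for sub-sample precision.
import numpy as np

def fwhm(profile: np.ndarray, spacing: float = 1.0) -> float:
    """FWHM of a single-peaked 1D profile, in units of `spacing`."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i0, i1):
        # Interpolated position where the profile crosses the half-max level.
        y0, y1 = profile[i0], profile[i1]
        return i0 + (half - y0) / (y1 - y0) * (i1 - i0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * spacing
```

Passing the pixel pitch as `spacing` converts the width to physical units, e.g. the 0.42 mm value reported above.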

  15. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving potential EVA site surveys, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate, and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotic Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained, and discusses its application in lunar surface robotic surveying and scouting.
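
    Converting one lidar return (pulse time-of-flight plus bearing) into a 3D point can be sketched as follows; the azimuth/elevation angle convention is an assumption for illustration.

```python
# Sketch of the reconstruction step: each lidar return is a pulse
# time-of-flight plus two bearing angles, converted to a Cartesian point.
import math

C = 299_792_458.0  # speed of light, m/s

def return_to_point(tof_s: float, azimuth_rad: float, elevation_rad: float):
    """Convert one time-of-flight return and its bearing to (x, y, z)."""
    r = C * tof_s / 2.0  # round-trip time -> one-way range
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return x, y, z
```

Accumulating these points over the scan pattern yields the three-dimensional image described above.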

  16. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high-density, high-performance microelectronics pursued through a dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through-vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  17. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  18. Evaluation of a prototype 3D ultrasound system for multimodality imaging of cervical nodes for adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Fraser, Danielle; Fava, Palma; Cury, Fabio; Vuong, Te; Falco, Tony; Verhaegen, Frank

    2007-03-01

    Sonography has good topographic accuracy for superficial lymph node assessment in patients with head and neck cancers. It is therefore an ideal non-invasive tool for precise inter-fraction volumetric analysis of enlarged cervical nodes. In addition, when registered with computed tomography (CT) images, ultrasound information may improve target volume delineation and facilitate image-guided adaptive radiation therapy. A feasibility study was developed to evaluate the use of a prototype ultrasound system capable of three-dimensional visualization and multi-modality image fusion for cervical node geometry. A ceiling-mounted optical tracking camera recorded the position and orientation of the transducer in order to register the transducer's position with respect to the room's coordinate system. Tracking systems were installed in both the CT-simulator and radiation therapy treatment rooms. Serial images were collected at the time of treatment planning and at subsequent treatment fractions. Volume reconstruction was performed by generating surfaces around contours. The quality of the spatial reconstruction and semi-automatic segmentation was highly dependent on the system's ability to track the transducer throughout each scan procedure. The ultrasound information provided enhanced soft tissue contrast and facilitated node delineation. Manual segmentation was the preferred method to contour structures due to their sonographic topography.

  19. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high-quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high-quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide results relating to both 3D image quality and render performance.
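
    The DIBR alternative described above can be sketched as a per-pixel horizontal shift driven by the z-buffer. The hole filling here is deliberately naive, and the linear depth-to-disparity mapping is an assumption for illustration, not the driver's actual algorithm.

```python
# Simplified sketch of Depth-Image-Based Rendering (DIBR): shift each pixel
# of the single rendered view horizontally by a disparity derived from depth.
import numpy as np

def dibr_view(image: np.ndarray, depth: np.ndarray, eye: int,
              max_disp: int = 8) -> np.ndarray:
    """Synthesize one eye's view (eye = +1 right, -1 left).

    image -- (H, W) grayscale frame; depth -- (H, W) in [0, 1], 1 = near.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    disp = np.round(eye * max_disp * depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    # Naive hole filling: copy from the left neighbor (disocclusions).
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

Rendering once and warping twice (left and right eye) is what avoids the cost of rendering the full scene a second time.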

  20. SU-E-J-13: Six Degree of Freedom Image Fusion Accuracy for Cranial Target Localization On the Varian Edge Stereotactic Radiosurgery System: Comparison Between 2D/3D and KV CBCT Image Registration

    SciTech Connect

    Xu, H; Song, K; Chetty, I; Kim, J; Wen, N

    2015-06-15

    Purpose: To determine the 6 degree-of-freedom systematic deviations between 2D/3D and CBCT image registration with various imaging setups and fusion algorithms on the Varian Edge Linac. Methods: An anthropomorphic head phantom with embedded radio-opaque targets was scanned with CT slice thicknesses of 0.8, 1, 2, and 3 mm. The 6 DOF systematic errors were assessed by comparing 2D/3D (kV/MV with CT) with 3D/3D (CBCT with CT) image registrations with different offset positions, similarity measures, image filters, and CBCT slice thicknesses (1 and 2 mm). The 2D/3D registration accuracy of 51 fractions for 26 cranial SRS patients was also evaluated by analyzing 2D/3D pre-treatment verifications taken after 3D/3D image registrations. Results: The systematic deviations of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs were within ±0.3 mm and ±0.3° for translations and rotations with a 95% confidence interval (CI) for a reference CT with 0.8 mm slice thickness. No significant difference (P>0.05) in target localization was observed between 0.8 mm, 1 mm, and 2 mm CT slice thicknesses with CBCT slice thicknesses of 1 mm and 2 mm. With 3 mm CT slice thickness, both 2D/3D and 3D/3D registrations performed less accurately in the longitudinal direction than with thinner CT slices (0.60±0.12 mm and 0.63±0.07 mm off, respectively). Using a content filter, and using pattern intensity rather than mutual information as the similarity measure, significantly improved 2D/3D registration accuracy (P=0.02 and P=0.01, respectively). For the patient study, means and standard deviations of residual errors were 0.09±0.32 mm, −0.22±0.51 mm and −0.07±0.32 mm in the VRT, LNG and LAT directions, respectively, and 0.12°±0.46°, −0.12°±0.39° and 0.06°±0.28° in the RTN, PITCH, and ROLL directions, respectively. The 95% CI of translational and rotational deviations were comparable to those in the phantom study. Conclusion: 2D/3D image registration provided on the Varian Edge radiosurgery, 6 DOF

  1. 3D Imaging of Density Gradients Using Plenoptic BOS

    NASA Astrophysics Data System (ADS)

    Klemkowsky, Jenna; Clifford, Chris; Fahringer, Timothy; Thurow, Brian

    2016-11-01

    The combination of background oriented schlieren (BOS) and a plenoptic camera, termed Plenoptic BOS, is explored through two proof-of-concept experiments. The motivation of this work is to provide a 3D technique capable of observing density disturbances. BOS uses the relationship between density and refractive index gradients to observe an apparent shift in a patterned background through image comparison. Conventional BOS systems acquire a single line-of-sight measurement and require complex configurations to obtain 3D measurements, which are not always conducive to experimental facilities. Plenoptic BOS exploits the plenoptic camera's ability to generate multiple perspective views and refocused images from a single raw plenoptic image during post-processing. Using such capabilities with regard to BOS provides multiple line-of-sight measurements of density disturbances, which can be used collectively to generate refocused BOS images. Such refocused images allow the position of density disturbances to be qualitatively and quantitatively determined. The image that provides the sharpest density gradient signature corresponds to a specific depth. These results offer motivation to advance Plenoptic BOS with an ultimate goal of reconstructing a 3D density field.
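
    The underlying BOS measurement (the apparent shift of the background between a reference image and a distorted image) is commonly estimated with phase correlation. The sketch below is a generic integer-pixel version, not the authors' processing chain; sub-pixel refinement and windowing are omitted.

```python
# Sketch of BOS shift estimation via phase correlation: the peak of the
# inverse FFT of the normalized cross-power spectrum gives the shift.
import numpy as np

def patch_shift(reference: np.ndarray, distorted: np.ndarray):
    """Return the (dy, dx) integer shift of `distorted` relative to `reference`."""
    f1 = np.fft.fft2(reference)
    f2 = np.fft.fft2(distorted)
    cross = np.conj(f1) * f2
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peaks past the halfway point into negative shifts.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
```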

  2. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM), and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images considered. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
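
    The image-topology idea (using flight-control data to limit which image pairs are matched) can be sketched as a simple GPS-proximity filter; the distance threshold is an assumed parameter, not the paper's value.

```python
# Sketch of topology-limited matching: pair only images captured close
# together, shrinking the candidate set before costly feature matching.
import math

def candidate_pairs(positions, max_dist_m: float):
    """positions: list of (x, y) camera positions in meters.

    Return index pairs (i, j), i < j, within max_dist_m of each other."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= max_dist_m:
                pairs.append((i, j))
    return pairs
```

Matching only these pairs instead of all n(n-1)/2 combinations is what reduces the running time for large scenes.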

  3. Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations

    PubMed Central

    Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

    2008-01-01

    We have developed a method and a device entitled prostate mechanical imager (PMI) for the real-time imaging of prostate using a transrectal probe equipped with a pressure sensor array and position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and in vivo images from patients prove that PMI has potential to become a diagnostic tool that could largely supplant DRE through its higher sensitivity, quantitative record storage, ease-of-use and inherent low cost. PMID:17024836

  4. Defining the medial-lateral axis of an anatomical femur coordinate system using freehand 3D ultrasound imaging.

    PubMed

    Passmore, Elyse; Sangeux, Morgan

    2016-03-01

    Hip rotation from gait analysis informs clinical decisions regarding correction of femoral torsional deformities. However, it is among the least repeatable measures due to discrepancies in determining the medial-lateral axis of the femur. Conventional or functional calibration methods may be used to define the axis, but there is no benchmark to evaluate these methods. Freehand 3D ultrasound, the coupling of ultrasound with 3D motion capture, may provide such a benchmark. We measured the accuracy in vitro and repeatability in vivo of determining the femur condylar axis from freehand 3D ultrasound. The condylar axis provided the reference medial-lateral axis of the femur and was used to evaluate one conventional method and three functional calibration methods, applied to three calibration movements. Ten healthy subjects (20 limbs) underwent 3D gait analysis and freehand 3D ultrasound. The functional calibration methods were a transformation technique, a geometrical method and a method that minimises variance of knee varus-valgus kinematics (DynaKAD). The conventional method used markers over the femoral epicondyles. The condylar axis determined by 3D ultrasound showed good accuracy in vitro, 1.6° (SD: 0.3°), and good repeatability in vivo, 0.2° (RMSD: 2.3°). The DynaKAD method applied to the walking calibration movement determined the medial-lateral axis closest to the ultrasound reference. The average angular difference in the transverse plane was 3.1° (SD: 6.1°). Freehand 3D ultrasound offers an accurate, non-invasive and relatively fast method to locate the medial-lateral axis of the femur for gait analysis.

  5. Development of a Hausdorff distance based 3D quantification technique to evaluate the CT imaging system impact on depiction of lesion morphology

    NASA Astrophysics Data System (ADS)

    Sahbaee, Pooyan; Robins, Marthony; Solomon, Justin; Samei, Ehsan

    2016-04-01

The purpose of this study was to develop a 3D quantification technique to assess the impact of the imaging system on the depiction of lesion morphology. Regional Hausdorff Distance (RHD) was computed from two 3D volumes: virtual mesh models of synthetic nodules ("virtual nodules") and CT images of physical nodules ("physical nodules"). The method can be described in the following steps. First, the synthetic nodule was inserted into an anthropomorphic Kyoto thorax phantom and scanned on a Siemens scanner (Flash); the nodule was then segmented from the image. Second, in order to match the orientations of the nodules, the digital models of the "virtual" and "physical" nodules were both geometrically translated to the origin, and the "physical" nodule was gradually rotated in 10-degree increments. Third, the Hausdorff Distance was calculated for each pair of "virtual" and "physical" nodules; the minimum HD value identified the best-matching pair. Finally, the 3D RHD map and the distribution of RHD were computed for the matched pair, and the technique was reduced to a scalar using the FWHM of the RHD distribution. The analysis was conducted for various nodule shapes (spherical, lobular, elliptical, and spiculated). The calculated FWHM values of the RHD distribution for the 8-mm spherical, lobular, elliptical, and spiculated "virtual" and "physical" nodules were 0.23, 0.42, 0.33, and 0.49, respectively.
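The matching step described above (translate both nodules to the origin, rotate one in 10-degree increments, keep the orientation with minimum Hausdorff distance) can be sketched as follows; the random point-cloud data and the brute-force distance computation are illustrative assumptions, not the study's code:

```python
import numpy as np

# Symmetric Hausdorff distance between two point sets (brute force), and a
# rotation search about the z-axis in 10-degree steps to find the
# best-matching orientation of the "physical" nodule.

def hausdorff(a, b):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def best_match(virtual, physical, step_deg=10):
    best = (np.inf, 0)
    for deg in range(0, 360, step_deg):
        t = np.radians(deg)
        rz = np.array([[np.cos(t), -np.sin(t), 0],
                       [np.sin(t),  np.cos(t), 0],
                       [0,          0,         1]])
        best = min(best, (hausdorff(virtual, physical @ rz.T), deg))
    return best  # (minimum HD, rotation in degrees)

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))                 # toy "virtual" nodule surface
t = np.radians(90)                              # "physical" copy rotated by 90 deg
rz90 = np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])
hd, deg = best_match(pts, pts @ rz90.T)
print(hd, deg)  # HD ~ 0 at the 270-degree counter-rotation
```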

  6. Automation and Preclinical Evaluation of a Dedicated Emission Mammotomography System for Fully 3-D Molecular Breast Imaging

    DTIC Science & Technology

    2009-10-01

volumetric shape. These MRI breast image sets can thus be used as the digital “phantoms” when utilizing computer models for system development and orbit...future breast phantom experiments should utilize ~700 mL breast volumes as previous experiments in our lab have mainly used phantoms >1000 mL in volume...sizes from this study retrospectively validate the range of shapes and sizes (250 to 1700 mL volumes) of custom shaped pendant breast phantoms

  7. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

As its clinical applications grow, 3-D ultrasound imaging is undergoing rapid technical development. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and reduced cost. We designed a sliding track with a linear position sensor attached, which transmitted positional data via a Bluetooth-based wireless communication module, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired as the probe was moved along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs.
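The core reconstruction idea, placing B-scans into a regular volume using the track positions, can be sketched as below; the slice pitch, data layout, and nearest-slice binning with averaging are assumptions of this illustration rather than the authors' algorithm:

```python
import numpy as np

# Bin each 2-D B-scan to the nearest slice of a regular 3-D volume using
# its position along the linear track; slices hit by several scans are
# averaged. (Real systems interpolate more carefully.)

def stack_bscans(scans, positions_mm, slice_pitch_mm=0.5):
    n_slices = int(round((positions_mm.max() - positions_mm.min()) / slice_pitch_mm)) + 1
    vol = np.zeros((n_slices,) + scans[0].shape)
    count = np.zeros(n_slices)
    for scan, pos in zip(scans, positions_mm):
        k = int(round((pos - positions_mm.min()) / slice_pitch_mm))
        vol[k] += scan
        count[k] += 1
    nonempty = count > 0
    vol[nonempty] /= count[nonempty][:, None, None]  # average multiply-hit slices
    return vol

scans = [np.full((4, 4), float(i)) for i in range(10)]  # toy B-scans
positions = np.linspace(0.0, 4.5, 10)                   # positions along the track (mm)
volume = stack_bscans(scans, positions)
print(volume.shape)  # (10, 4, 4)
```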

  8. The occlusion-adjusted prefabricated 3D mirror image templates by computer simulation: the image-guided navigation system application in difficult cases of head and neck reconstruction.

    PubMed

    Cheng, Hsu-Tang; Wu, Chao-I; Tseng, Ching-Shiow; Chen, Hung-Chi; Lee, Wu-Song; Chen, Philip Kuo-Ting; Chang, Sophia Chia-Ning

    2009-11-01

Computer applications in head and neck reconstruction are rapidly emerging, creating not only a virtual environment for presurgical planning but also helping in image-guided navigational surgery. This study evaluates the use of prefabricated 3-dimensional (3D) mirror image templates made from computer-simulated adjusted occlusions to assist microvascular prefabricated flap insertion during reconstructive surgery. Five patients underwent tumor ablation surgery in 1999 and survived for 8 years. Four of the patients with malignancy received radiation therapy. All patients in this study suffered from severe malocclusion causing trismus, headache, temporomandibular joint pain and facial asymmetry, and precluding further insertion of osseointegrated teeth. They underwent a 3D computed tomography examination, and the unprocessed raw data were sent for computer simulation to adjust the occlusion; a mirror image template could thus be fabricated for microsurgical flap guidance. The computer-simulated occlusion was acceptable and facial symmetry was obtained. The use of the template resulted in a shorter operation time, and recovery was as expected. The computer-simulated, occlusion-adjusted 3D mirror image templates aid the use of free vascularized bone flaps for restoring continuity to the mandible. The coordinated arch will help with further osseointegrated teeth insertion.

  9. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    SciTech Connect

    Lee, J.; Yun, G. S. Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-15

A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  10. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yun, G. S.; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  11. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system.

    PubMed

    Lee, J; Yun, G S; Lee, J E; Kim, M; Choi, M J; Lee, W; Park, H K; Domier, C W; Luhmann, N C; Sabbagh, S A; Park, Y S; Lee, S G; Bak, J G

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  12. [A new 2D and 3D imaging approach to musculoskeletal physiology and pathology with low-dose radiation and the standing position: the EOS system].

    PubMed

    Dubousset, Jean; Charpak, Georges; Dorion, Irène; Skalli, Wafa; Lavaste, François; Deguise, Jacques; Kalifa, Gabriel; Ferey, Solène

    2005-02-01

Close collaboration between multidisciplinary specialists (physicists, biomechanical engineers, medical radiologists and pediatric orthopedic surgeons) has led to the development of a new low-dose radiation device named EOS. EOS has three main advantages. First, thanks to the use of a gaseous X-ray detector invented by Georges Charpak (Nobel Prize winner, 1992), the dose necessary to obtain a 2D image of the skeletal system has been reduced by a factor of 8 to 10, while that required to obtain a 3D reconstruction has fallen by a factor of 800 to 1000 compared with reconstruction from CT slices. Second, the patient is examined in the standing (or seated) position and is scanned simultaneously from head to feet, both frontally and laterally; this is a major advantage over conventional CT, which requires the patient to be placed horizontally. Third, the 3D reconstructions of each element of the osteo-articular system are as precise as those obtained by conventional CT. EOS is also rapid, taking only 15 to 30 minutes to image the entire spine.

  13. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

Summary Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry, provide insights and perspectives on generating 3D mass spectral data, and discuss the steps necessary to build a 3D image volume. PMID:22276611

  14. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).
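As background, the standard (symmetric) mutual information similarity measure that the paper's asymmetric variant builds on can be estimated from a joint intensity histogram; this sketch is generic and not the paper's implementation:

```python
import numpy as np

# Mutual information between two images, estimated from a joint intensity
# histogram: MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) ).

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0  # avoid log(0) on empty histogram cells
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                 # perfectly aligned pair
mi_rand = mutual_information(img, rng.random((64, 64)))  # unrelated pair
print(mi_self > mi_rand)
```

In registration, such a measure is evaluated inside an optimizer that searches over the rigid-body pose for the maximum-similarity alignment.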

  15. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization based primarily on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from any direction over a full 360° without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  16. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diablo is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case.
This time, we

  17. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at 1.4 m camera height from the ground. The scanning rate of the camera can be set up to its maximum of 5000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize the 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
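A minimal sketch of depth-based crack detection on a single transverse range profile, with a running-median surface estimate and a fixed depth threshold as purely illustrative assumptions (the paper's algorithms are more elaborate):

```python
import numpy as np

# A crack appears in a transverse depth profile as a narrow region that is
# markedly deeper than the locally fitted pavement surface. Estimate the
# surface with a running median, then flag points well below it.

def crack_mask(profile_mm, window=15, drop_mm=2.0):
    pad = window // 2
    padded = np.pad(profile_mm, pad, mode="edge")
    surface = np.array([np.median(padded[i:i + window])
                        for i in range(len(profile_mm))])
    return profile_mm < surface - drop_mm

profile = np.zeros(100)   # flat pavement surface (depth in mm)
profile[40:43] -= 5.0     # a 3-pixel-wide, 5 mm deep crack
mask = crack_mask(profile)
print(mask.sum())  # 3 pixels flagged as crack
```

The running median is robust to narrow cracks because a crack narrower than half the window never shifts the median, so the surface estimate stays on the intact pavement.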

  18. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

Recent developments in optical 3D surface imaging technologies provide better ways to digitalize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market, differing in dimensions, speed and accuracy. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures and shapes, and across ambient lighting conditions, are crucial. To date, however, no systematic approach for evaluating the performance of different 3D surface imaging systems exists. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture and color.

  19. Validation of image processing tools for 3-D fluorescence microscopy.

    PubMed

    Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric

    2002-04-01

3-D optical fluorescence microscopy has become an efficient tool for volumetric investigation of living biological samples. Using the optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system's optical transfer function and non-optimal experimental conditions, the acquired raw data usually suffer from distortions. In order to carry out biological analysis, the raw data have to be restored by deconvolution. System identification by the point-spread function yields the knowledge of the actual system and experimental parameters necessary to restore the raw data; it is furthermore helpful for refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method applied to cytogenetics is presented.
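For context, the classic Richardson-Lucy scheme is one widely used deconvolution method for restoring fluorescence data once the PSF is known; this sketch is generic (2-D, toy data) and not the VIEW3D implementation, which additionally chooses regularisation parameters automatically:

```python
import numpy as np
from scipy.signal import fftconvolve

# Richardson-Lucy deconvolution: iteratively multiply the estimate by the
# back-projected ratio of observed data to the re-blurred estimate.

def richardson_lucy(observed, psf, iterations=50):
    observed = np.clip(observed, 0, None)   # guard against FFT round-off negatives
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est

# Toy test: blur a point source with a Gaussian PSF and restore it.
truth = np.zeros((33, 33)); truth[16, 16] = 1.0
x = np.arange(-3, 4)
psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / 2.0); psf /= psf.sum()
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(np.unravel_index(restored.argmax(), restored.shape))  # peak back at (16, 16)
```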

  20. A Modular and Affordable Time-Lapse Imaging and Incubation System Based on 3D-Printed Parts, a Smartphone, and Off-The-Shelf Electronics

    PubMed Central

    Schwan, Emil; Fatsis-Kavalopoulos, Nikos; Kreuger, Johan

    2016-01-01

Time-lapse imaging is a powerful tool for studying cellular dynamics and cell behavior over long periods of time to acquire detailed functional information. However, commercially available time-lapse imaging systems are expensive and this has limited a broader implementation of this technique in low-resource environments. Further, the availability of time-lapse imaging systems often presents workflow bottlenecks in well-funded institutions. To address these limitations we have designed a modular and affordable time-lapse imaging and incubation system (ATLIS). The ATLIS enables the transformation of simple inverted microscopes into live cell imaging systems using custom-designed 3D-printed parts, a smartphone, and off-the-shelf electronic components. We demonstrate that the ATLIS provides stable environmental conditions to support normal cell behavior during live imaging experiments in both traditional and evaporation-sensitive microfluidic cell culture systems. Thus, the system presented here has the potential to increase the accessibility of time-lapse microscopy of living cells for the wider research community. PMID:28002463

  1. A Modular and Affordable Time-Lapse Imaging and Incubation System Based on 3D-Printed Parts, a Smartphone, and Off-The-Shelf Electronics.

    PubMed

    Hernández Vera, Rodrigo; Schwan, Emil; Fatsis-Kavalopoulos, Nikos; Kreuger, Johan

    2016-01-01

Time-lapse imaging is a powerful tool for studying cellular dynamics and cell behavior over long periods of time to acquire detailed functional information. However, commercially available time-lapse imaging systems are expensive and this has limited a broader implementation of this technique in low-resource environments. Further, the availability of time-lapse imaging systems often presents workflow bottlenecks in well-funded institutions. To address these limitations we have designed a modular and affordable time-lapse imaging and incubation system (ATLIS). The ATLIS enables the transformation of simple inverted microscopes into live cell imaging systems using custom-designed 3D-printed parts, a smartphone, and off-the-shelf electronic components. We demonstrate that the ATLIS provides stable environmental conditions to support normal cell behavior during live imaging experiments in both traditional and evaporation-sensitive microfluidic cell culture systems. Thus, the system presented here has the potential to increase the accessibility of time-lapse microscopy of living cells for the wider research community.

  2. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

3D imaging techniques are indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, compact 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  3. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

This paper gives new insights into optical 3D imagery. We explore the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from a limited number of available 2D data. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with Maximum Intensity Projection can generate 3D views of the considered scene, from which we can extract the 3D concealed object in real time. With different original numerical and experimental examples, we investigate the effects of the input contrasts and show the robustness and stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.
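The Maximum Intensity Projection step mentioned above can be sketched as follows; generating angle-dependent views by rotating a voxel volume is an illustrative stand-in for the paper's laser-image data:

```python
import numpy as np
from scipy.ndimage import rotate

# Angle-dependent views by Maximum Intensity Projection (MIP): rotate the
# volume about its first axis and keep, for every ray, the brightest voxel.

def mip_views(volume, angles_deg):
    views = []
    for a in angles_deg:
        rot = rotate(volume, a, axes=(1, 2), reshape=False, order=1)
        views.append(rot.max(axis=2))  # project along the third axis
    return np.stack(views)

vol = np.zeros((16, 16, 16))
vol[8, 8, 4] = 1.0  # one bright voxel, off-centre
views = mip_views(vol, [0, 90, 180, 270])
print(views.shape)  # (4, 16, 16)
```

In a reflective-tomography pipeline such projections (or their measured counterparts) would feed a Radon-type reconstruction rather than being the end product.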

  4. MO-DE-210-06: Development of a Supercompounded 3D Volumetric Ultrasound Image Guidance System for Prone Accelerated Partial Breast Irradiation (APBI)

    SciTech Connect

    Chiu, T; Hrycushko, B; Zhao, B; Jiang, S; Gu, X

    2015-06-15

Purpose: For early-stage breast cancer, accelerated partial breast irradiation (APBI) is a cost-effective breast-conserving treatment. Irradiation in the prone position can mitigate respiration-induced breast movement and achieve maximal sparing of heart and lung tissues. However, accurate dose delivery is challenging due to breast deformation and lumpectomy cavity shrinkage. We propose a 3D volumetric ultrasound (US) image guidance system for accurate prone APBI. Methods: The designed system, set beneath the prone breast board, consists of a water container, a US scanner, and a two-layer breast immobilization cup. The outer layer of the breast cup forms the inner wall of the water container, while the inner layer is attached directly to the patient's breast for immobilization. The US transducer is attached to the outer layer of the breast cup at the dent of the water container. Rotational US scans in a transverse plane are achieved by simultaneously rotating the water container and transducer, and multiple transverse scans form a 3D scan. A supercompounding-technique-based volumetric US reconstruction algorithm was developed for 3D image reconstruction. The performance of the designed system was evaluated with two custom-made gelatin phantoms containing several cylindrical inserts filled with water (11% reflection coefficient between materials). One phantom was designed for positioning evaluation and the other for scaling assessment. Results: In the positioning evaluation phantom, the central distances between the inserts are 15, 20, 30 and 40 mm. The distances on reconstructed images differ by −0.19, −0.65, −0.11 and −1.67 mm, respectively. In the scaling evaluation phantom, the inserts are 12.7, 19.05, 25.40 and 31.75 mm in diameter. Measured insert sizes on images differed by 0.23, 0.19, −0.1 and 0.22 mm, respectively. Conclusion: The phantom evaluation results show that the developed 3D volumetric US system can accurately localize target position and determine

  5. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision and well-structured measurements in (industrial) photogrammetry to fully automated non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems, and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.

  6. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  7. 3D-modeling of the spine using EOS imaging system: Inter-reader reproducibility and reliability

    PubMed Central

    Rehm, Johannes; Germann, Thomas; Akbar, Michael; Pepke, Wojciech; Kauczor, Hans-Ulrich; Weber, Marc-André; Spira, Daniel

    2017-01-01

Objectives To retrospectively assess the interreader reproducibility and reliability of EOS 3D full spine reconstructions in patients with adolescent idiopathic scoliosis (AIS). Methods 73 patients with a mean age of 17 years and a moderate AIS (median Cobb angle 18.2°) obtained low-dose standing biplanar radiographs with EOS. Two independent readers performed “full spine” 3D reconstructions of the spine with the “full-spine” method, adjusting the bone contour of every thoracic and lumbar vertebra (Th1-L5). Interreader reproducibility was assessed regarding rotation of every single vertebra in the coronal (i.e. frontal), sagittal (i.e. lateral), and axial plane, T1/T12 kyphosis, T4/T12 kyphosis, L1/L5 lordosis, L1/S1 lordosis and pelvic parameters. Radiation exposure, scan time and 3D reconstruction time were recorded. Results Intraclass correlation (ICC) ranged between 0.83 and 0.98 for frontal vertebral rotation, between 0.94 and 0.99 for lateral vertebral rotation and between 0.51 and 0.88 for axial vertebral rotation. ICC was 0.92 for T1/T12 kyphosis, 0.95 for T4/T12 kyphosis, 0.90 for L1/L5 lordosis, 0.85 for L1/S1 lordosis, 0.97 for pelvic incidence, 0.96 for sacral slope, 0.98 for sagittal pelvic tilt and 0.94 for lateral pelvic tilt. The mean time for reconstruction was 14.9 minutes (reader 1: 14.6 minutes, reader 2: 15.2 minutes, p<0.0001). The mean total absorbed dose was 593.4 ± 212.3 μGy per patient. Conclusion EOS “full spine” 3D angle measurement of vertebral rotation proved to be reliable and was performed in an acceptable reconstruction time. Interreader reproducibility of axial rotation was limited to some degree in the upper and middle thoracic spine due to the obtuse angulation of the pedicles and the processi spinosi in the frontal view, which somewhat complicates their delineation. PMID:28152019
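For reference, the single-measure, two-way random-effects intraclass correlation ICC(2,1) commonly used for interreader agreement can be computed from an ANOVA decomposition (Shrout-Fleiss formulas); the data below are simulated, not the study's:

```python
import numpy as np

# ICC(2,1): two-way random effects, absolute agreement, single measure.
# ratings has shape (subjects, readers).

def icc_2_1(ratings):
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-reader means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subjects SS
    ssc = n * ((col_means - grand) ** 2).sum()   # between-readers SS
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    msr, msc, mse = ssr / (n - 1), ssc / (k - 1), sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(2)
truth = rng.normal(20, 8, size=50)                      # simulated "true" angles
readers = truth[:, None] + rng.normal(0, 1, size=(50, 2))  # two noisy readers
icc = icc_2_1(readers)
print(round(icc, 2))  # close to 1 when reader noise is small
```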

  8. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By applying the pseudoscopic-to-orthoscopic conversion method, elemental image arrays captured with different parameters can be converted into a common format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and the real-world scene with the desired depth information and transparency parameters. The experimental results demonstrate the feasibility of the proposed 3D augmented reality with integral imaging.

  9. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However, following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax, and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for 3D perception not based on binocular parallax. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  10. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least-squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
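The Lucas-Kanade generalization mentioned above rests on a simple least-squares idea. A hypothetical 1-D sketch (illustrative only, not Ames Stereo Pipeline code) of estimating a subpixel disparity between two scanlines:

```python
import numpy as np

# Two 1-D "scanlines": g is f translated right by a subpixel amount.
x = np.arange(200, dtype=float)
d_true = 0.3
f = np.exp(-(x - 100.0) ** 2 / 50.0)
g = np.exp(-(x - 100.0 - d_true) ** 2 / 50.0)

# Lucas-Kanade linearization: g(x) ~ f(x) + a * f'(x), solved for a in
# the least-squares sense over the whole window; the shift is -a.
fx = np.gradient(f)      # spatial derivative
ft = g - f               # inter-image difference
a = (ft * fx).sum() / (fx * fx).sum()
d_est = -a
print(round(d_est, 2))   # ~ 0.3
```

The full 2-D method solves the analogous normal equations per window and iterates; the pipeline's stochastic plane fit extends this to locally planar disparity surfaces.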

  11. Use of INSAT-3D sounder and imager radiances in the 4D-VAR data assimilation system and its implications in the analyses and forecasts

    NASA Astrophysics Data System (ADS)

    Indira Rani, S.; Taylor, Ruth; George, John P.; Rajagopal, E. N.

    2016-05-01

INSAT-3D, the first Indian geostationary satellite with sounding capability, provides valuable information over India and the surrounding oceanic regions which is pivotal to numerical weather prediction. In collaboration with the UK Met Office, NCMRWF developed the capability to assimilate INSAT-3D Clear Sky Brightness Temperature (CSBT), from both the sounder and the imager, in the 4D-Var assimilation system used at NCMRWF. Of the 18 sounder channels, radiances from 9 channels are selected for assimilation depending on the relevance of the information in each channel. The first three high peaking channels, the CO2 absorption channels and the three water vapor channels (channels 10, 11, and 12) are assimilated both over land and ocean, whereas the window channels (channels 6, 7, and 8) are assimilated only over the ocean. Measured satellite radiances are compared with those from short-range forecasts to monitor the data quality. This is based on the assumption that the observed satellite radiances are free from calibration errors and the short-range forecast provided by the NWP model is free from systematic errors. Innovations (observation minus forecast) before and after the bias correction indicate how well the bias correction works. Since the biases vary with air mass, time, and scan angle, and also with instrument degradation, an accurate bias correction algorithm for the assimilation of INSAT-3D sounder radiances is important. This paper discusses the bias correction methods and other quality controls used for the selected INSAT-3D sounder channels and the impact of bias-corrected radiances in the data assimilation system, particularly over India and the surrounding oceanic regions.
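Radiance bias correction of this kind typically regresses innovations on a set of predictors such as scan angle and air-mass terms. A toy sketch with synthetic innovations and an assumed two-predictor linear model (not NCMRWF's operational scheme; the predictor choice and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic innovations (observation minus forecast) for one channel,
# with a bias that depends linearly on the scan angle plus unbiased
# noise. Both predictors and coefficients are illustrative assumptions.
n = 500
scan_angle = rng.uniform(-50.0, 50.0, n)                     # degrees
innovations = 0.8 + 0.02 * scan_angle + rng.normal(0.0, 0.3, n)  # K

# Least-squares fit of the bias model b = c0 + c1 * scan_angle.
predictors = np.column_stack([np.ones(n), scan_angle])
coeffs, *_ = np.linalg.lstsq(predictors, innovations, rcond=None)

# Bias-corrected innovations should be centered on zero.
corrected = innovations - predictors @ coeffs
print(abs(corrected.mean()) < 1e-6)
```

Monitoring the corrected innovations over time, as described in the abstract, then reveals residual air-mass- or angle-dependent biases and instrument drift.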

  12. A hybrid framework for 3D medical image segmentation.

    PubMed

    Chen, Ting; Metaxas, Dimitris

    2005-12-01

    In this paper we propose a novel hybrid 3D segmentation framework which combines Gibbs models, marching cubes and deformable models. In the framework, first we construct a new Gibbs model whose energy function is defined on a high order clique system. The new model includes both region and boundary information during segmentation. Next we improve the original marching cubes method to construct 3D meshes from Gibbs models' output. The 3D mesh serves as the initial geometry of the deformable model. Then we deform the deformable model using external image forces so that the model converges to the object surface. We run the Gibbs model and the deformable model recursively by updating the Gibbs model's parameters using the region and boundary information in the deformable model segmentation result. In our approach, the hybrid combination of region-based methods and boundary-based methods results in improved segmentations of complex structures. The benefit of the methodology is that it produces high quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the context of the Insight ToolKit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing experimental results with expert manual segmentations. The evaluation results show that the methodology achieves high quality segmentation results with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework.
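The interplay of region and boundary information in a Gibbs model can be illustrated with a toy Ising-style energy on a binary labeling; the paper's actual high-order clique energy is more elaborate, so this is only a schematic analogue.

```python
import numpy as np

def gibbs_energy(labels, image, mu_fg, mu_bg, beta):
    """Toy region + boundary energy for a binary labeling.

    Region term: squared deviation from the class mean.
    Boundary term: beta per pair of unlike 4-neighbors.
    """
    means = np.where(labels == 1, mu_fg, mu_bg)
    region = ((image - means) ** 2).sum()
    boundary = beta * (
        (labels[1:, :] != labels[:-1, :]).sum()
        + (labels[:, 1:] != labels[:, :-1]).sum()
    )
    return region + boundary

image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0                 # bright square on dark background
good = (image > 0.5).astype(int)      # labeling matching the square
noisy = good.copy()
noisy[0, 0] = 1                       # one spurious foreground pixel

e_good = gibbs_energy(good, image, 1.0, 0.0, beta=0.5)
e_noisy = gibbs_energy(noisy, image, 1.0, 0.0, beta=0.5)
print(e_good < e_noisy)  # True: the correct labeling has lower energy
```

Minimizing such an energy favors labelings that both fit the intensity statistics and have short boundaries, which is why the framework's Gibbs output makes a good initialization for the deformable model.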

  13. The Diagnostic Radiological Utilization Of 3-D Display Images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Dwyer, Samuel J.; Preston, David F.; Batnitzky, Solomon; Lee, Kyo R.

    1984-10-01

    In the practice of radiology, computer graphics systems have become an integral part of the use of computed tomography (CT), nuclear medicine (NM), magnetic resonance imaging (MRI), digital subtraction angiography (DSA) and ultrasound. Gray scale computerized display systems are used to display, manipulate, and record scans in all of these modalities. As the use of these imaging systems has spread, various applications involving digital image manipulation have also been widely accepted in the radiological community. We discuss one of the more esoteric of such applications, namely, the reconstruction of 3-D structures from plane section data, such as CT scans. Our technique is based on the acquisition of contour data from successive sections, the definition of the implicit surface defined by such contours, and the application of the appropriate computer graphics hardware and software to present reasonably pleasing pictures.

  14. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and dependencies on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.
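In the simplest case, photoacoustic estimation of oxygen saturation reduces to linear spectral unmixing of oxy- and deoxyhemoglobin at two wavelengths. A sketch with approximate extinction coefficients (rough magnitudes near 750 nm and 850 nm; look up real tabulated values before any quantitative use):

```python
import numpy as np

# Approximate molar extinction coefficients [eps_Hb, eps_HbO2] at two
# wavelengths (illustrative values, not a validated table).
E = np.array([[1405.0,  518.0],    # ~750 nm
              [ 691.0, 1058.0]])   # ~850 nm

# Forward model: measured absorption mu_a = E @ [C_Hb, C_HbO2].
c_true = np.array([40.0, 110.0])   # concentrations, arbitrary units
mu_a = E @ c_true

# Unmixing: invert the 2x2 system, then sO2 = HbO2 / (Hb + HbO2).
c_est = np.linalg.solve(E, mu_a)
so2 = c_est[1] / c_est.sum()
print(round(so2, 3))  # 0.733
```

A full MiHMO2-style model layers perfusion and volume-fraction physiology on top of this spectral step; the sketch covers only the hemoglobin-status part.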

  15. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code achieved the fastest timings reported for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave-equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.
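The acoustic wave propagation modeled by the first code can be illustrated in one dimension with a standard second-order finite-difference scheme (a toy analogue, not the Paragon code):

```python
import numpy as np

# Minimal 1-D constant-velocity acoustic wave propagation with a
# second-order finite-difference scheme.
nx, c, dx = 401, 1.0, 1.0 / 400          # grid size, velocity, spacing
dt = 0.5 * dx / c                         # respects the CFL condition
x = np.linspace(0.0, 1.0, nx)

u_prev = np.exp(-((x - 0.5) / 0.02) ** 2)  # initial Gaussian pulse
u = u_prev.copy()                           # zero initial velocity
r2 = (c * dt / dx) ** 2

nsteps = int(0.2 / (c * dt))                # propagate to t = 0.2
for _ in range(nsteps):
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# The pulse splits into two halves traveling at speed c; after t = 0.2
# the right-going peak sits near x = 0.7.
peak_x = x[200 + np.argmax(u[200:])]
print(abs(peak_x - 0.7) < 0.02)
```

Production seismic codes solve the same wave equation in 3-D with heterogeneous velocity models and absorbing boundaries, which is where the Paragon's parallelism matters.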

  16. 3D measurement system based on computer-generated gratings

    NASA Astrophysics Data System (ADS)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

A new kind of 3D measurement system has been developed to capture the 3D profile of complex objects. The measurement principle is triangulation with digital fringe projection, where the fringes are generated entirely by computer; four computer-generated fringes form the data source for phase-shifting 3D profilometry. The hardware of the system includes a computer, video camera, projector, image grabber, and a VGA board with two ports (one port linked to the screen, the other to the projector). The software of the system consists of a grating projection module, an image grabbing module, a phase reconstruction module and a 3D display module. A software-based method for synchronizing grating projection and image capture is proposed. To address the nonlinear error of the captured fringes, a compensation method based on pixel-to-pixel gray-level correction is introduced. A least-squares phase unwrapping method solves the phase reconstruction problem, using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as the weight. The system adopts an algorithm from a MATLAB toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm, and one measurement takes 3-5 s.
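Four-step phase-shifting profilometry of the kind described recovers the wrapped phase in closed form from the four fringe images; a minimal sketch:

```python
import numpy as np

# Four computer-generated fringe patterns with pi/2 phase steps:
# I_k = A + B*cos(phi + k*pi/2), the standard four-step algorithm.
x = np.linspace(0.0, 4 * np.pi, 256)
phi_true = np.angle(np.exp(1j * x))            # wrapped ground truth
A, B = 0.5, 0.4                                # background, modulation
I1, I2, I3, I4 = (A + B * np.cos(phi_true + k * np.pi / 2)
                  for k in range(4))

# I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so the wrapped
# phase follows from a single arctangent, independent of A and B.
phi_est = np.arctan2(I4 - I2, I1 - I3)
print(np.allclose(phi_est, phi_true))  # True
```

The arctangent's insensitivity to the background A and modulation B is what makes the four-step scheme robust to uneven illumination; the system's least-squares unwrapping step then removes the 2π discontinuities.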

  17. 3D Multifunctional Ablative Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken

    2015-01-01

    NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.

  18. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-09

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging.
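The Stokes parameters and degree of polarization follow directly from intensity measurements behind different analyzers; a sketch of the standard definitions (the paper's photon-counting Maximum Likelihood Estimation and Total Variation denoising steps are omitted):

```python
import numpy as np

def stokes_dop(i0, i45, i90, i135, i_rcp, i_lcp):
    """Stokes vector and degree of polarization from six intensity
    measurements (four linear analyzer angles plus the two circular
    components). Under photon counting, each intensity would be the
    Poisson maximum-likelihood estimate, i.e. the mean photon count."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    s3 = i_rcp - i_lcp
    dop = np.sqrt(s1**2 + s2**2 + s3**2) / s0
    return (s0, s1, s2, s3), dop

# Fully linearly polarized light at 0 degrees: DoP = 1.
_, dop = stokes_dop(1.0, 0.5, 0.0, 0.5, 0.5, 0.5)
print(dop)  # 1.0
```

In the photon-starved regime, the sparsity of the counts makes the ratio in `dop` noisy, which is why the authors estimate the intensities first and denoise before forming the Stokes quantities.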

  19. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
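The idea of tracking local minima across scales can be sketched in 1-D with Gaussian smoothing (a hypothetical illustration, not the authors' watershed implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, minimum_filter1d

# A 1-D height profile with two valleys (stand-ins for interstices
# between teeth) plus fine-scale clutter.
x = np.linspace(0.0, 1.0, 400)
profile = (np.cos(2 * np.pi * 2 * x)             # valleys at x=0.25, 0.75
           + 0.05 * np.sin(2 * np.pi * 40 * x))  # high-frequency clutter

def local_minima(signal, size=9):
    """Indices where the signal equals its windowed local minimum."""
    return np.flatnonzero(signal == minimum_filter1d(signal, size=size))

# Track minima at increasing smoothing scales: spurious minima from
# the clutter disappear, while the two true valleys persist.
for sigma in (0.0, 2.0, 5.0, 10.0):
    smooth = gaussian_filter1d(profile, sigma) if sigma > 0 else profile
    minima = local_minima(smooth)
    print(sigma, len(minima))
```

Minima that persist across all scales are the stable candidates; the paper's algorithm applies the same persistence idea on the 2-D imprint surface at four scales before producing its final minima map.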

  20. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the appearance of a 3D object with fixed physical dimensions in an overhead image depends strongly on object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
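The matching and cueing stages can be sketched with brute-force zero-mean normalized cross-correlation followed by thumbnail ranking; the orientation search and local-maxima analysis of the paper are omitted for brevity, and the image, template, and block size here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy overhead "image" with a bright L-shaped object planted in it,
# plus a small 2-D template of the same shape.
image = rng.normal(0.0, 0.1, (64, 64))
template = np.zeros((5, 5))
template[:, 0] = 1.0
template[4, :] = 1.0
image[40:45, 8:13] += 2.0 * template   # plant the object at (40, 8)

def match_map(img, tpl):
    """Zero-mean normalized cross-correlation at every valid offset."""
    th, tw = tpl.shape
    t = tpl - tpl.mean()
    out = np.zeros((img.shape[0] - th + 1, img.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = img[i:i + th, j:j + tw]
            w = w - w.mean()
            out[i, j] = (w * t).sum() / np.sqrt(
                (w * w).sum() * (t * t).sum())
    return out

score = match_map(image, template)

# Cueing stage: split the score map into 16x16 "thumbnails" and rank
# them by their best degree of match, the figure of merit an analyst
# would use to prioritize inspection.
fom = {}
for bi in range(0, score.shape[0], 16):
    for bj in range(0, score.shape[1], 16):
        fom[(bi, bj)] = score[bi:bi + 16, bj:bj + 16].max()
best = max(fom, key=fom.get)
print(best)  # should be the block containing (40, 8)
```

Rotating the template over a set of orientations and keeping the strongest response per pixel would reproduce the paper's orientation-invariant degree of match.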

  1. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

This paper deals with a new optical non-conventional 3D laser imaging technique. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. Because of its ability to detect and recognize objects, 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images, and we analyze the different parameters of the identification process, such as resolution, camouflage scenario, noise impact and degree of lacunarity.

  2. Implementation of active-type Lamina 3D display system.

    PubMed

    Yoon, Sangcheol; Baek, Hogil; Min, Sung-Wook; Park, Soon-Gi; Park, Min-Kyu; Yoo, Seong-Hyeon; Kim, Hak-Rin; Lee, Byoungho

    2015-06-15

Lamina 3D display is a new type of multi-layer 3D display which utilizes the polarization state as a new dimension of depth information. The Lamina 3D display system has several advantages: it reduces the amount of data needed to represent a 3D image, it can easily be built using conventional projectors, and it has the potential to be applied in many applications. However, the system can be limited in depth range and viewing angle by the properties of the volume components used for expression. In this paper, we propose a volume made of layers of switchable diffusers to implement an active-type Lamina 3D display system. Because the diffusing rate of the layers is independent of the polarization state, a polarizer wheel is added to the proposed system so that each sectioned image is synchronized with the diffusing layer at its designated location. The imaging volume of the proposed system consists of five layers of polymer-dispersed liquid crystal, and the total size of the implemented volume is 24 × 18 × 12 mm³. The proposed system achieves improved viewing qualities such as enhanced depth expression and a widened viewing angle.

  3. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

Currently, three imaging spectrometer architectures (tunable filter, dispersive, and Fourier transform) are viable for imaging the universe in three dimensions. Each architecture has a domain of greatest utility. The optimum choice among the alternatives depends on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each alternative is delineated, both for instruments having ideal performance and for instrumentation based on currently available technology. The environment and science objectives of the Next Generation Space Telescope are used as a representative case to provide a basis for comparison of the alternatives.

  4. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  5. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Frechet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  6. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  7. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
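Rigid point-set registration with Gaussian mixture correspondences can be sketched as an EM-style loop of soft assignment plus weighted Procrustes. This is a simplified 2-D stand-in for the authors' GMM approach, without their orientation terms and bifurcation weighting; the annealing schedule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Model points (e.g. a sampled centerline) and a target that is a
# rigidly transformed copy: X = R Y + t.
Y = rng.uniform(-1.0, 1.0, (40, 2))
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.3, -0.2])
X = Y @ R_true.T + t_true

# EM-style loop: soft-assign each target point to model points, then
# solve a weighted Procrustes problem; anneal sigma to sharpen the
# correspondences.
R, t = np.eye(2), np.zeros(2)
for sigma in (0.5, 0.2, 0.1, 0.05, 0.02):
    for _ in range(10):
        Yt = Y @ R.T + t                              # current pose
        d2 = ((X[:, None, :] - Yt[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma**2))
        W /= W.sum(axis=1, keepdims=True) + 1e-12     # soft matches
        # Weighted Procrustes (Kabsch) for the best rigid update.
        wtot = W.sum()
        mx = (W.sum(1) @ X) / wtot
        my = (W.sum(0) @ Y) / wtot
        H = (Y - my).T @ W.T @ (X - mx)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])
        R = Vt.T @ D @ U.T
        t = mx - R @ my

err = np.abs(X - (Y @ R.T + t)).max()
print(f"max registration error: {err:.2e}")
```

With exact one-to-one data the loop snaps to the true pose; on real centerlines the soft assignment is what gives the method its robustness to missing branches and outliers.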

  8. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    PubMed

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

The detector in a highly accurate, high-definition scanning 3D imaging lidar system requires high frequency bandwidth and sufficient photosensitive area. To overcome the small photosensitive area of an existing indium gallium arsenide detector with a given frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed. Accordingly, a receiving optical system with two hexagonal prisms is provided, and the beam-splitting effect is analyzed in a simulation experiment. Using this method, the receiving optical system's FOV can be effectively improved up to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm.

  9. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    PubMed

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements <2 mm) and precision (mean point-to-plane error <2 mm) at an average resolution of at least 390 points per cm². Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p < 0.001). The choice of object color can influence measurement range and precision. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.
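The point-to-plane precision metric used in the evaluation can be reproduced on synthetic data: fit a plane to noisy points by SVD and average the orthogonal residuals. This is a sketch of the metric only, not the study's pipeline; the target size, tilt, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "depth sensor" samples of a flat target: points on a
# tilted plane plus 1 mm Gaussian measurement noise.
n = 2000
pts = np.zeros((n, 3))
pts[:, :2] = rng.uniform(-0.1, 0.1, (n, 2))     # 20 cm square, meters
normal_true = np.array([0.3, 0.0, 1.0])
normal_true /= np.linalg.norm(normal_true)
pts[:, 2] = -0.3 * pts[:, 0]                    # plane 0.3x + z = 0
pts += rng.normal(0.0, 0.001, (n, 3))

# Fit a plane by SVD: the normal is the right singular vector of the
# centered points with the smallest singular value.
centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
normal = vt[2]

# Mean point-to-plane error, the precision metric reported above.
err = np.abs(centered @ normal)
print(err.mean() < 0.002)   # sub-2 mm, like the reported precision
```

Repeating this fit over many captured frames and poses gives the per-condition precision curves that the comparison between Kinect I and Kinect II is based on.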

  10. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    DTIC Science & Technology

    2014-05-01

Potential Cost Savings with 3D Printing Combined with 3D Imaging and CPLM for Fleet Maintenance and Revitalization. David N. Ford, 2014. Research context: additive manufacturing (3D printing). Problem: learning-curve savings forecast in the SHIPMAIN maintenance initiative have not materialized.

  11. Morphometrics, 3D Imaging, and Craniofacial Development

    PubMed Central

    Hallgrimsson, Benedikt; Percival, Christopher J.; Green, Rebecca; Young, Nathan M.; Mio, Washington; Marcucio, Ralph

    2017-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  12. Optimizing and Evaluating an Integrated SPECT-CmT System Dedicated to Improved 3-D Breast Cancer Imaging

    DTIC Science & Technology

    2010-05-01

... The dose absorbed during a scan is initially measured using radiochromic film, calibrated using thermoluminescent detectors and an ion chamber. Task 1(a), optimizing tradeoffs between absorbed dose and image quality for the CmT subsystem, called for embedding thermoluminescent dosimeters or available MOSFET detectors. Thermoluminescent detectors (TLDs) were used in the experiments but, after consultation with experts in the field of radiation dosimetry, it was decided...

  13. Automation and Preclinical Evaluation of a Dedicated Emission Mammotomography System for Fully 3-D Molecular Breast Imaging

    DTIC Science & Technology

    2007-10-01

    using a large two headed gamma camera, in order to look for drainage into the lymph node. If drainage appears, the node is marked and the woman has the...lymphoscintigraphy) prior to lymph node surgery. The technologist injects a radionuclide directly into the breast tumor and then images the patient... lymph node removed with gamma-probe guided surgery to help prevent further spreading of the cancer. I am scheduled to observe in the operation room

  14. 3D Imaging of the OH mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Kouahla, M. N.; Moreels, G.; Faivre, M.; Clairemidi, J.; Meriwether, J. W.; Lehmacher, G. A.; Vidal, E.; Veliz, O.

    2010-01-01

A new and original stereo imaging method is introduced to measure the altitude of the OH nightglow layer and provide a 3D perspective map of the altitude of the layer centroid. Near-IR photographs of the OH layer are taken at two sites separated by a distance of 645 km. Each photograph is processed to provide a satellite view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient (NCC). This method is suitable for obtaining 3D representations of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12°09′08.2″ S, 75°33′49.3″ W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16°33′17.6″ S, 71°39′59.4″ W, altitude 2272 m) close to Arequipa. 3D maps of the layer surface were retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter was located at 86.3 km on July 26. Comparable wavy relief features appear in the 3D and intensity maps. It is shown that the vertical amplitude of the wave system varies as exp(Δz/2H) within the altitude range Δz = 83.5-88.0 km, H being the scale height. The oscillatory kinetic energy at the altitude of the OH layer lies between 3 × 10⁻⁴ and 5.4 × 10⁻⁴ J/m³, which is 2-3 times smaller than the values derived from partial radio wave at 52°N latitude.
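The NCC matching step described above can be sketched as follows. This is a generic normalized cross-correlation implementation, not the authors' code; the patch and image names are illustrative:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation coefficient of two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(patch, image, stride=1):
    """Slide `patch` over `image`; return ((row, col), score) of the highest NCC."""
    ph, pw = patch.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(0, image.shape[0] - ph + 1, stride):
        for c in range(0, image.shape[1] - pw + 1, stride):
            score = ncc(patch, image[r:r + ph, c:c + pw])
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best
```

Because NCC is invariant to local brightness and contrast offsets, it is well suited to the low-contrast airglow imagery the abstract describes.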

  15. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution and full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) design, the multiple views of the 3D display are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images for all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. The single projector is thus able to generate an equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is a significant reduction in cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures of multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
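As a back-of-envelope check on the time-multiplexing requirement (a generic calculation, not from the paper): the single projector must refresh at the per-view rate multiplied by the number of views.

```python
def required_projector_rate_hz(n_views, per_view_hz=60):
    """A projector time-multiplexing n_views views must refresh n_views times
    faster than the refresh rate each individual view should receive."""
    return n_views * per_view_hz

# e.g. a 32-view display at 60 Hz per view needs a 1920 Hz projector
```

This is why the abstract emphasizes a "high speed" projector: even moderate view counts push the required frame rate into the kilohertz range.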

  16. 3D flash lidar imager onboard UAV

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Liu, Yilong; Yang, Jiazhi; Zhang, Rongting; Su, Chengjie; Shi, Yujun; Zhou, Xiang

    2014-11-01

A new generation of flash LiDAR sensor called GLidar-I is presented in this paper. GLidar-I is being developed by Guilin University of Technology in cooperation with the Guilin Institute of Optical Communications. It consists of a control and processing system, a transmitting system, and a receiving system. Each component has been designed and implemented, and tests, experiments, and validation have been conducted for each. The experimental results demonstrate that GLidar-I can effectively measure distances of about 13 m with an accuracy of about 11 cm in the lab.

  17. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing. The equipment used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses with a focal length of 3 mm and a diameter of 1 mm, arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display; on the 3D display panel; and 5, 10, 15, and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  18. 3D Whole Heart Imaging for Congenital Heart Disease

    PubMed Central

    Greil, Gerald; Tandon, Animesh (Aashoo); Silva Vieira, Miguel; Hussain, Tarique

    2017-01-01

Three-dimensional (3D) whole heart techniques form a cornerstone of cardiovascular magnetic resonance imaging of congenital heart disease (CHD). They offer significant advantages over other CHD imaging modalities and techniques: no ionizing radiation; free-breathing acquisition; ECG-gated dual-phase imaging for accurate measurements and tissue property estimation; and higher signal-to-noise ratio and isotropic voxel resolution for multiplanar reformatting assessment. However, there are limitations, such as potentially long acquisition times with image quality degradation. Recent advances in and current applications of 3D whole heart imaging in CHD are detailed, as well as future directions. PMID:28289674

  19. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and computer-vision-based modeling. SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan are the main software packages representing these approaches, respectively, each with different methods suitable for image-based 3D city modeling. A literature study shows that, to date, no comprehensive comparative study is available on creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and the output 3D model products. The study area for this research work is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and offers comments on what can and cannot be done with each software package. The study concludes that every package has advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  20. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
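The ART update at the core of ART-rc (before any refraction correction) is the classic Kaczmarz sweep over ray projections. A minimal sketch, assuming a system matrix `A` whose rows are ray path weights and a measurement vector `b`; this is the textbook algorithm, not the authors' implementation:

```python
import numpy as np

def art(A, b, n_iters=50, relax=1.0):
    """Kaczmarz-style ART: repeatedly project the estimate x onto the
    hyperplane defined by each ray equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            ai = A[i]
            norm2 = ai @ ai
            if norm2 == 0:
                continue  # skip rays that miss the volume entirely
            x += relax * (b[i] - ai @ x) / norm2 * ai
    return x
```

In ART-rc the rows of `A` would be retraced along the refracted (bent) ray paths rather than straight lines, which is the paper's contribution.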

  1. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  2. A colour image reproduction framework for 3D colour printing

    NASA Astrophysics Data System (ADS)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

In this paper, current technologies in full colour 3D printing are introduced, and a framework for the colour image reproduction process for 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that applying the proposed colour image reproduction framework significantly enhances colour reproduction performance. With post colour correction, a further improvement in the colour process is achieved for 3D printed objects.

  3. 3-D imaging and illustration of mouse intestinal neurovascular complex.

    PubMed

    Fu, Ya-Yuan; Peng, Shih-Jung; Lin, Hsin-Yao; Pasricha, Pankaj J; Tang, Shiue-Cheng

    2013-01-01

    Because of the dispersed nature of nerves and blood vessels, standard histology cannot provide a global and associated observation of the enteric nervous system (ENS) and vascular network. We prepared transparent mouse intestine and combined vessel painting and three-dimensional (3-D) neurohistology for joint visualization of the ENS and vasculature. Cardiac perfusion of the fluorescent wheat germ agglutinin (vessel painting) was used to label the ileal blood vessels. The pan-neuronal marker PGP9.5, sympathetic neuronal marker tyrosine hydroxylase (TH), serotonin, and glial markers S100B and GFAP were used as the immunostaining targets of neural tissues. The fluorescently labeled specimens were immersed in the optical clearing solution to improve photon penetration for 3-D confocal microscopy. Notably, we simultaneously revealed the ileal microstructure, vasculature, and innervation with micrometer-level resolution. Four examples are given: 1) the morphology of the TH-labeled sympathetic nerves: sparse in epithelium, perivascular at the submucosa, and intraganglionic at myenteric plexus; 2) distinct patterns of the extrinsic perivascular and intrinsic pericryptic innervation at the submucosal-mucosal interface; 3) different associations of serotonin cells with the mucosal neurovascular elements in the villi and crypts; and 4) the periganglionic capillary network at the myenteric plexus and its contact with glial fibers. Our 3-D imaging approach provides a useful tool to simultaneously reveal the nerves and blood vessels in a space continuum for panoramic illustration and analysis of the neurovascular complex to better understand the intestinal physiology and diseases.

  4. Stereoscopic contents authoring system for 3D DMB data service

    NASA Astrophysics Data System (ADS)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of the 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG, and MNG) are stereoscopically displayed on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system that provides convenient and straightforward content creation and editing functionalities. For the creation of stereoscopic contents, we focused mainly on two methods: CG (Computer Graphics) based creation and real image based creation. In the CG based creation scenario, where CG data generated with a conventional MAYA or 3DS MAX tool are rendered into stereoscopic images by applying suitable disparity and camera parameters, we use the X-file for direct conversion to stereoscopic objects, so-called 3D DMB objects. In the case of real image based creation, the chroma-key method is applied to real video sequences to acquire alpha-mapped images, which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes a timeline editor for both stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented a content verification module to verify and modify the contents by adjusting the disparity. The proposed system will leverage the power of stereoscopic contents creation for mobile 3D data services, especially targeted at T-DMB, with capabilities of CG and real image based contents creation, timeline editing, and content verification.

  5. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    PubMed Central

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  6. Digital holography and 3D imaging: introduction to feature issue.

    PubMed

    Kim, Myung K; Hayasaki, Yoshio; Picart, Pascal; Rosen, Joseph

    2013-01-01

    This feature issue of Applied Optics on Digital Holography and 3D Imaging is the sixth of an approximately annual series. Forty-seven papers are presented, covering a wide range of topics in phase-shifting methods, low coherence methods, particle analysis, biomedical imaging, computer-generated holograms, integral imaging, and many others.

  7. Using Single-Camera 3-D Imaging to Guide Material Handling Robots in a Nuclear Waste Package Closure System

    SciTech Connect

    Rodney M. Shurtliff

    2005-09-01

    Nuclear reactors for generating energy and conducting research have been in operation for more than 50 years, and spent nuclear fuel and associated high-level waste have accumulated in temporary storage. Preparing this spent fuel and nuclear waste for safe and permanent storage in a geological repository involves developing a robotic packaging system—a system that can accommodate waste packages of various sizes and high levels of nuclear radiation. During repository operation, commercial and government-owned spent nuclear fuel and high-level waste will be loaded into casks and shipped to the repository, where these materials will be transferred from the casks into a waste package, sealed, and placed into an underground facility. The waste packages range from 12 to 20 feet in height and four and a half to seven feet in diameter. Closure operations include sealing the waste package and all its associated functions, such as welding lids onto the container, filling the inner container with an inert gas, performing nondestructive examinations on welds, and conducting stress mitigation. The Idaho National Laboratory is designing and constructing a prototype Waste Package Closure System (WPCS). Control of the automated material handling is an important part of the overall design. Waste package lids, welding equipment, and other tools must be moved in and around the closure cell during the closure process. These objects are typically moved from tool racks to a specific position on the waste package to perform a specific function. Periodically, these objects are moved from a tool rack or the waste package to the adjacent glovebox for repair or maintenance. Locating and attaching to these objects with the remote handling system, a gantry robot, in a loosely fixtured environment is necessary for the operation of the closure cell. Reliably directing the remote handling system to pick and place the closure cell equipment within the cell is the major challenge.

  8. 3D localized 2D ultrafast J-resolved magnetic resonance spectroscopy: in vitro study on a 7 T imaging system.

    PubMed

    Roussel, T; Giraudeau, P; Ratiney, H; Akoka, S; Cavassila, S

    2012-02-01

2D Magnetic Resonance Spectroscopy (MRS) is a well-known tool for the analysis of complicated, overlapped MR spectra and was therefore originally used for structural analysis. It also shows potential for biomedical applications, as evidenced by an increasing number of works on localized in vivo experiments. However, 2D MRS suffers from long acquisition times due to the necessary collection of numerous increments in the indirect dimension (t(1)). This paper presents the first 3D localized 2D ultrafast J-resolved MRS sequence, developed on a small animal imaging system, allowing the acquisition of a 3D localized 2D J-resolved MRS spectrum in a single scan. Sequence parameters were optimized with respect to signal-to-noise ratio and spectral resolution. Sensitivity and spatial localization properties were characterized and discussed. An automatic post-processing method for reducing the artifacts inherent to ultrafast excitation is also presented. This sequence offers efficient signal localization and shows great potential for in vivo dynamic spectroscopy.

  9. Alignment issues, correlation techniques and their assessment for a visible light imaging-based 3D printer quality control system

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2016-05-01

    Quality control is critical to manufacturing. Frequently, techniques are used to define object conformity bounds, based on historical quality data. This paper considers techniques for bespoke and small batch jobs that are not statistical model based. These techniques also serve jobs where 100% validation is needed due to the mission or safety critical nature of particular parts. One issue with this type of system is alignment discrepancies between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets to prevent errors attributable to misalignment.
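One standard correlation technique for estimating a translational misalignment between a generated model projection and the perceived image is phase correlation. This is a generic sketch of that technique, not the paper's specific method; the array names are illustrative:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) shift that maps `ref` onto `moved`
    via the peak of the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    shifts = [p - size if p > size // 2 else p
              for p, size in zip(peak, ref.shape)]
    return tuple(int(s) for s in shifts)
```

Once the shift is known, the captured data set can be re-registered to the model before any conformity check, removing errors attributable purely to misalignment.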

  10. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is not yet a clinically relevant tool, as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography, as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the readout accuracy of the previous, slower technologies. Upon construction, optimization, and implementation of several components, including a diffuser, band-pass filter, registration mount, and fluid filtration system, the dosimetry system provides high-quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data, including a set of six irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution.
Benchmarking tests showed that the mean 3D gamma passing rate (3%, 3 mm, 5% dose threshold) over the six benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of
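The 3%/3 mm gamma test used for benchmarking can be sketched with a brute-force global gamma index. This is a simplified illustration of the standard gamma criterion, not the study's analysis code; the array names and the 1 mm voxel spacing are assumptions:

```python
import numpy as np
from itertools import product

def gamma_pass_rate(ref, evl, spacing=1.0, dta=3.0, dd=0.03, threshold=0.05):
    """Simplified global 3-D gamma analysis: dd is the dose tolerance as a
    fraction of the reference maximum, dta the distance tolerance in mm."""
    ref = np.asarray(ref, float)
    evl = np.asarray(evl, float)
    dmax = ref.max()
    search = int(np.ceil(dta / spacing))
    offsets = list(product(range(-search, search + 1), repeat=ref.ndim))
    passed = total = 0
    for idx in np.ndindex(ref.shape):
        if ref[idx] < threshold * dmax:
            continue  # ignore voxels below the low-dose threshold
        total += 1
        best = np.inf
        for off in offsets:
            j = tuple(i + o for i, o in zip(idx, off))
            if any(a < 0 or a >= s for a, s in zip(j, ref.shape)):
                continue
            dist2 = sum((o * spacing) ** 2 for o in off) / dta ** 2
            dose2 = ((evl[j] - ref[idx]) / (dd * dmax)) ** 2
            best = min(best, dist2 + dose2)
        if best <= 1.0:
            passed += 1
    return passed / total if total else 1.0
```

A voxel passes when some nearby evaluated voxel is simultaneously close enough in space and in dose; the reported rate is the passing fraction of above-threshold voxels.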

  11. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion, so the watermarking technique is applied to the non-ROI portion of the medical image, preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D EIA data are then embedded into the host image. Watermark extraction is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed using the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so the method overcomes the weakness of traditional watermarking methods that have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
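As a much-simplified illustration of non-blind embedding and extraction (not the paper's EIA/CIIR scheme), an additive watermark can be embedded into a host image and later recovered when the original host is available. The `alpha` strength and the normalization are assumptions:

```python
import numpy as np

def embed(host, watermark, alpha=0.05):
    """Additively embed a zero-mean, unit-variance watermark into the host,
    scaled by alpha times the host's own standard deviation."""
    wm = (watermark - watermark.mean()) / (watermark.std() + 1e-12)
    return host + alpha * host.std() * wm

def extract(marked, host, alpha=0.05):
    """Recover the normalized watermark given the original (unmarked) host."""
    return (marked - host) / (alpha * host.std() + 1e-12)
```

In the paper's scheme the embedded payload is the EIA itself, whose redundant per-lenslet perspectives are what make reconstruction robust to damage; this sketch only shows the embed/extract symmetry.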

  12. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-06

The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel; in some visions of future operating rooms, however, this process is to be replaced by an automated system. Experiments to determine the best NIR wavelengths to optimize vein contrast across physiological differences, such as skin tone and/or the presence of hair on the arm or wrist surface, are presented. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to the multispectral system in order to provide 3D information about the patient's arm orientation. Images of each patient's arm are captured under every possible combination of illuminants, and the optimal combination of wavelengths maximizing vein contrast for a given subject is determined using linear discriminant analysis.
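The wavelength-selection step can be sketched as a two-class LDA scored by the Fisher discriminant ratio: labeled vein and tissue pixel samples across candidate wavelengths are projected onto the LDA direction, and the wavelength pair with the best class separation wins. The sample matrices and wavelength list below are hypothetical, not the authors' data or code:

```python
import numpy as np
from itertools import combinations

def fisher_score(proj_a, proj_b):
    """Fisher discriminant ratio for two sets of 1-D projected samples."""
    return (proj_a.mean() - proj_b.mean()) ** 2 / (
        proj_a.var() + proj_b.var() + 1e-12)

def best_wavelength_pair(samples_vein, samples_tissue, wavelengths):
    """samples_*: (n_pixels, n_wavelengths) intensity matrices.
    Score every wavelength pair by the Fisher ratio along the LDA direction."""
    best, best_pair = -np.inf, None
    for i, j in combinations(range(len(wavelengths)), 2):
        Xv = samples_vein[:, [i, j]]
        Xt = samples_tissue[:, [i, j]]
        # two-class LDA direction: Sw^-1 (mu_vein - mu_tissue)
        Sw = np.cov(Xv.T) + np.cov(Xt.T)
        w = np.linalg.solve(Sw + 1e-9 * np.eye(2), Xv.mean(0) - Xt.mean(0))
        score = fisher_score(Xv @ w, Xt @ w)
        if score > best:
            best, best_pair = score, (wavelengths[i], wavelengths[j])
    return best_pair, best
```

Exhaustive pair search is feasible here because the number of 10 nm illumination bands is small.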

  13. DCT and DST Based Image Compression for 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts: the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at compression ratios of up to 99%. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 with respect to 3D reconstruction, with perceptual quality equivalent to JPEG2000.
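The two-transform front end (row-wise DCT-II, then column-wise DST-II, then uniform quantization) can be sketched with explicit transform matrices. The quantization step `q` and the unnormalized matrix construction are illustrative, and the paper's entropy-coding and minimization stages are omitted:

```python
import numpy as np

def dct2_matrix(n):
    """Unnormalized DCT-II matrix; rows index frequency, columns index sample."""
    k = np.arange(n)
    return np.cos(np.pi * (k[None, :] + 0.5) * k[:, None] / n)

def dst2_matrix(n):
    """Unnormalized DST-II matrix; rows index frequency, columns index sample."""
    k = np.arange(n)
    return np.sin(np.pi * (k[None, :] + 0.5) * (k[:, None] + 1) / n)

def forward(block, q=8.0):
    """Row-wise DCT, then column-wise DST, then uniform quantization."""
    C = dct2_matrix(block.shape[1])
    S = dst2_matrix(block.shape[0])
    coeffs = S @ (block @ C.T)  # rows transformed by C, columns by S
    return np.round(coeffs / q)

def inverse(qcoeffs, q=8.0):
    """Dequantize, undo the column DST, then undo the row DCT."""
    C = dct2_matrix(qcoeffs.shape[1])
    S = dst2_matrix(qcoeffs.shape[0])
    rows = np.linalg.solve(S, qcoeffs * q)
    return np.linalg.solve(C, rows.T).T
```

Larger `q` discards more high-frequency detail for higher compression; the paper's decoder additionally recovers high-frequency components via a binary search rather than simple dequantization.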

  14. Bi-sided integral imaging with 2D/3D convertibility using scattering polarizer.

    PubMed

    Yeom, Jiwoon; Hong, Keehoon; Park, Soon-gi; Hong, Jisoo; Min, Sung-Wook; Lee, Byoungho

    2013-12-16

    We propose a two-dimensional (2D) and three-dimensional (3D) convertible bi-sided integral imaging. The proposed system uses the polarization state of projected light for switching its operation mode between 2D and 3D modes. By using an optical module composed of two scattering polarizers and one linear polarizer, the proposed integral imaging system simultaneously provides 3D images with 2D background images for observers who are located in the front and the rear sides of the system. The occlusion effect between 2D images and 3D images is realized by using a compensation mask for 2D images and the elemental images. The principle of proposed system is experimentally verified.

  15. 3D Subharmonic Ultrasound Imaging In Vitro and In Vivo

    PubMed Central

    Eisenbrey, John R.; Sridharan, Anush; Machado, Priscilla; Zhao, Hongjia; Halldorsdottir, Valgerdur G.; Dave, Jaydev K.; Liu, Ji-Bin; Park, Suhyun; Dianis, Scott; Wallace, Kirk; Thomenius, Kai E.; Forsberg, F.

    2012-01-01

    Rationale and Objectives: While contrast-enhanced ultrasound imaging techniques such as harmonic imaging (HI) have evolved to reduce tissue signals using the nonlinear properties of the contrast agent, levels of background suppression have been mixed. Subharmonic imaging (SHI) offers near-complete tissue suppression by centering the receive bandwidth at half the transmitting frequency. In this work we demonstrate the feasibility of 3D SHI and compare it to 3D HI. Materials and Methods: 3D HI and SHI were implemented on a Logiq 9 ultrasound scanner (GE Healthcare, Milwaukee, Wisconsin) with a 4D10L probe. Four-cycle SHI was implemented to transmit at 5.8 MHz and receive at 2.9 MHz, while 2-cycle HI was implemented to transmit at 5 MHz and receive at 10 MHz. The ultrasound contrast agent Definity (Lantheus Medical Imaging, North Billerica, MA) was imaged within a flow phantom and the lower pole of two canine kidneys in both HI and SHI modes. Contrast-to-tissue ratios (CTR) and rendered images were compared offline. Results: SHI resulted in significant improvement in CTR levels relative to HI both in vitro (12.11±0.52 vs. 2.67±0.77, p<0.001) and in vivo (5.74±1.92 vs. 2.40±0.48, p=0.04). Rendered 3D SHI images provided better tissue suppression and a greater overall view of vessels in a flow phantom and canine renal vasculature. Conclusions: The successful implementation of SHI in 3D allows imaging of vascular networks over a heterogeneous sample volume and should improve future diagnostic accuracy. Additionally, 3D SHI provides improved CTR values relative to 3D HI. PMID:22464198
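
    The figure of merit compared above, the contrast-to-tissue ratio, reduces to a ratio of mean ROI amplitudes expressed in dB; a short sketch with hypothetical amplitude values (the study measures these from the scanner data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical envelope-detected amplitudes for a vessel ROI (contrast agent)
    # and a tissue ROI; values are illustrative only.
    vessel_roi = rng.normal(40.0, 2.0, size=500)
    tissue_roi = rng.normal(10.0, 2.0, size=500)

    # Contrast-to-tissue ratio in dB, the quantity compared between SHI and HI.
    ctr_db = 20 * np.log10(vessel_roi.mean() / tissue_roi.mean())
    print(f"CTR = {ctr_db:.2f} dB")
    ```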

  16. Low Dose, Low Energy 3d Image Guidance during Radiotherapy

    NASA Astrophysics Data System (ADS)

    Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

    2006-04-01

    Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

  17. Experiments on terahertz 3D scanning microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies on THz coaxial transmission confocal microscopy, but few on THz dual-axis reflective confocal microscopy have been reported. In this paper, we utilized a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with a THz coaxial transmission confocal microscope, the microscope adopted in this paper attains higher axial resolution at the expense of reduced lateral resolution, yielding a more satisfactory 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil, and a combined sheet-metal sample with three layers, were scanned. The experimental results indicate that the system can extract the two Chinese characters "Zhong" and "Hua" and the three layers of the combined sheet metal. It can be anticipated that the microscope will be applicable to biology, medicine and other fields owing to its favorable 3D imaging capability.

  18. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into the study of 3D imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured by the two cameras share the same spatial resolution, allowing us to use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map constraining the matching of the stereo pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that the approach speeds up stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the range of applications.
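
    The core idea, restricting each pixel's disparity search to a narrow window around the TOF-derived initial disparity, can be sketched in plain Python (the paper implements this concurrently on an FPGA; the images, block size, and search radius here are toy values):

    ```python
    import numpy as np

    def tof_guided_matching(left, right, init_disp, radius=2, block=3):
        """SAD block matching where the search is restricted to a narrow window
        around an initial disparity taken from a TOF depth map."""
        h, w = left.shape
        half = block // 2
        disp = np.zeros_like(init_disp)
        for y in range(half, h - half):
            for x in range(half, w - half):
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                best, best_cost = init_disp[y, x], np.inf
                # Narrow search range around the TOF-derived disparity.
                for d in range(init_disp[y, x] - radius, init_disp[y, x] + radius + 1):
                    if x - d - half < 0 or x - d + half >= w:
                        continue
                    cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                    cost = np.abs(ref - cand).sum()  # sum of absolute differences
                    if cost < best_cost:
                        best, best_cost = d, cost
                disp[y, x] = best
        return disp

    # Synthetic pair: the right image is the left shifted by a known disparity of 4.
    rng = np.random.default_rng(3)
    left = rng.random((20, 40))
    true_d = 4
    right = np.roll(left, -true_d, axis=1)
    init = np.full(left.shape, 3, dtype=np.int64)  # coarse TOF estimate, off by 1
    disp = tof_guided_matching(left, right, init)
    print("recovered disparities (interior):", np.unique(disp[1:-1, 5:-1]))
    ```

    Shrinking the search from the full disparity range to `2*radius + 1` candidates is what makes the FPGA pipeline both fast and less prone to false matches.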

  19. Accelerated 3D catheter visualization from triplanar MR projection images.

    PubMed

    Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

    2010-07-01

    One major obstacle for MR-guided catheterizations is the long acquisition time associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem carries a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for the compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment.

  20. Frames-Based Denoising in 3D Confocal Microscopy Imaging.

    PubMed

    Konstantinidis, Ioannis; Santamaria-Pang, Alberto; Kakadiaris, Ioannis

    2005-01-01

    In this paper, we propose a novel denoising method for 3D confocal microscopy data based on robust edge detection. Our approach relies on the construction of a non-separable frame system in 3D that incorporates the Sobel operator in dual spatial directions. This multidirectional set of digital filters is capable of robustly detecting edge information by ensemble thresholding of the filtered data. We demonstrate the application of our method to both synthetic and real confocal microscopy data by comparing it to denoising methods based on separable 3D wavelets and 3D median filtering, and report very encouraging results.
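
    A simplified stand-in for the approach: Sobel operators along the three axes detect edges, and denoising is applied away from them so boundaries stay sharp. The paper builds a non-separable 3D frame system with ensemble thresholding; this sketch substitutes a gradient-magnitude threshold and a median filter, and the volume is synthetic:

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(4)

    # Synthetic 3D "confocal" volume: a bright cube on a dark background plus noise.
    vol = np.zeros((24, 24, 24))
    vol[8:16, 8:16, 8:16] = 1.0
    noisy = vol + rng.normal(0, 0.1, vol.shape)

    # Directional edge detection with Sobel filters along each axis; combine
    # into a gradient magnitude and threshold it to flag edge voxels.
    grad = np.sqrt(sum(ndimage.sobel(noisy, axis=a) ** 2 for a in range(3)))
    edges = grad > grad.mean() + 2 * grad.std()

    # Median-filter everywhere, but keep the original values on edge voxels.
    denoised = np.where(edges, noisy, ndimage.median_filter(noisy, size=3))

    noise_before = np.abs(noisy - vol).mean()
    noise_after = np.abs(denoised - vol).mean()
    print(f"mean abs error: {noise_before:.4f} -> {noise_after:.4f}")
    ```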

  1. Exposing digital image forgeries by 3D reconstruction technology

    NASA Astrophysics Data System (ADS)

    Wang, Yongqiang; Xu, Xiaojing; Li, Zhihui; Liu, Haizhen; Li, Zhigang; Huang, Wei

    2009-11-01

    Digital images are easy to tamper with and edit owing to the availability of powerful image processing and editing software. In particular, when a forgery is produced by photographing a staged scene, no manipulation is made after the picture is taken, so the usual methods, such as digital watermarks and statistical correlation techniques, can hardly detect the traces of tampering. According to the characteristics of such image forgeries, this paper presents a method based on 3D reconstruction technology that detects forgeries by discriminating the dimensional relationships of the objects appearing in the image. The detection method includes three steps. In the first step, all image parameters are calibrated and each crucial object in the image is chosen and matched. In the second step, the 3D coordinates of each object are calculated by bundle adjustment. In the final step, the dimensional relationship of each object is analyzed. Experiments were designed to test this detection method; the 3D reconstruction and the forged-image 3D reconstruction were computed independently. Test results show that the fabricated character of digital forgeries can be identified intuitively by this method.

  2. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualize in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of visual 3D imaging and thermal imaging, mapping the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  3. Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array.

    PubMed

    Kim, Do-Hyeong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Jeong, Ji-Seong; Lee, Jae-Won; Kim, Kyung-Ah; Kim, Nam; Yoo, Kwan-Hee

    2013-12-01

    This paper proposes an Open Computing Language (OpenCL) parallel processing method to generate the elemental image arrays (EIAs) for a hexagonal lens array from a three-dimensional (3D) object such as volume data. A hexagonal lens array has a higher fill factor than a rectangular lens array; however, each pixel of an elemental image must be determined to belong to a single hexagonal lens, so generation of the entire EIA requires very large computations. The proposed method reduces the processing time for the EIAs of a given hexagonal lens array. The proposed image space parallel processing (ISPP) method enhances the processing speed, enabling real-time interactive integral-imaging 3D display with a hexagonal lens array. In our experiment, we generated the EIAs for a hexagonal lens array in real time and obtained good processing times for large volume data over multiple lens-array configurations.
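
    The costly per-pixel step, deciding which hexagonal lens an EIA pixel belongs to, amounts to a nearest-centre search on an offset hexagonal grid; a small NumPy sketch (the grid layout and pitch are illustrative, and the real method runs this work in OpenCL kernels):

    ```python
    import numpy as np

    def hex_lens_assignment(height, width, pitch):
        """Map every EIA pixel to the index of its nearest hexagonal lens centre.
        The hexagonal grid is modeled as rows spaced pitch*sqrt(3)/2 apart with
        every other row offset by half a pitch (a geometric sketch only)."""
        row_h = pitch * np.sqrt(3) / 2
        rows = int(np.ceil(height / row_h)) + 1
        cols = int(np.ceil(width / pitch)) + 1
        centres = []
        for r in range(rows):
            off = pitch / 2 if r % 2 else 0.0  # alternate-row offset -> hex packing
            for c in range(cols):
                centres.append((r * row_h, c * pitch + off))
        centres = np.array(centres)
        yy, xx = np.mgrid[0:height, 0:width]
        pix = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
        # Nearest-centre search decides which elemental image a pixel belongs to.
        d2 = ((pix[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1).reshape(height, width), centres

    labels, centres = hex_lens_assignment(12, 12, pitch=6.0)
    print(f"{len(np.unique(labels))} lenses cover a 12x12 pixel patch")
    ```

    Because the assignment of each pixel is independent of every other pixel, the loop parallelizes trivially, which is exactly what makes a GPU/OpenCL formulation attractive.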

  4. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  5. Free segmentation in rendered 3D images through synthetic impulse response in integral imaging

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, M.; Llavador, A.; Sánchez-Ortiga, E.; Saavedra, G.; Javidi, B.

    2016-06-01

    Integral imaging is a technique that has the capability of providing not only the spatial but also the angular information of three-dimensional (3D) scenes. Important applications include 3D display and digital post-processing, for example depth reconstruction from integral images. In this contribution we propose a new reconstruction method that takes the integral image and a simplified version of the impulse response function (IRF) of the integral imaging (InI) system and performs a two-dimensional (2D) deconvolution. The IRF of an InI system has a periodic structure whose period depends directly on the axial position of the object. By considering the IRFs of different periods, we recover by deconvolution the depth information of the 3D scene. An advantage of our method is that it is possible to obtain nonconventional reconstructions by considering alternative synthetic impulse responses. Our experiments show the feasibility of the proposed method.
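
    The depth-selective deconvolution can be sketched with a synthetic few-impulse response standing in for one depth's periodic IRF, and a regularized (Wiener-style) 2D frequency-domain division; all shapes and values are illustrative, not the paper's system:

    ```python
    import numpy as np

    # Toy scene: an 8x8 bright square in a 64x64 field.
    scene = np.zeros((64, 64))
    scene[28:36, 28:36] = 1.0

    # Toy impulse response: a few displaced, weighted impulses standing in for
    # the depth-dependent periodic IRF of an InI system.
    irf = np.zeros((64, 64))
    irf[0, 0], irf[0, 8], irf[8, 0] = 0.5, 0.25, 0.25

    # Simulated integral image: circular convolution of the scene with the IRF.
    H = np.fft.fft2(irf)
    integral_img = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

    # Wiener-style 2D deconvolution with the matching IRF; deconvolving with an
    # IRF of a different period would instead bring a different depth into focus.
    eps = 1e-3  # regularization against near-zero frequencies
    G = np.fft.fft2(integral_img)
    recon = np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))

    err = np.abs(recon - scene).mean()
    print(f"mean reconstruction error: {err:.4f}")
    ```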

  6. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density, directionally continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a full 360-degree horizontal viewing angle is achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine; it can capture full 360-degree continuous images of a sample placed at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system's control software platform. Finally, several samples were imaged to demonstrate the capability of the system.

  7. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

  8. Visualization and Analysis of 3D Microscopic Images

    PubMed Central

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  9. 3D Image Reconstruction: Determination of Pattern Orientation

    SciTech Connect

    Blankenbecler, Richard

    2003-03-13

    The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and iteration to determine the phases.

  10. Accuracy of 3D Imaging Software in Cephalometric Analysis

    DTIC Science & Technology

    2013-06-21

    Imaging and Communication in Medicine (DICOM) files into personal computer-based software to enable 3D reconstruction of the craniofacial skeleton. These...tissue profile. CBCT data can be imported as DICOM files into personal computer-based software to provide 3D reconstruction of the craniofacial...been acquired for the three pig models. The CBCT data were exported into DICOM multi-file format. They will be imported into a proprietary

  11. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  12. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands that seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruit fly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruit fly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  13. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 × 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system "Traugott" is presented, which has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  14. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

    In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning a minimally invasive procedure, i.e., for selection of an appropriate stent-graft device for treatment of AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all required measurements for appropriate stent-graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, surpassing most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA. This helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
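
    The level-set idea can be illustrated on a 2D slice: the zero level set of an embedding function phi expands with an image-driven speed and halts at the boundary of the bright region. This is a toy sketch; the paper's method is 3-D, uses CTA-specific speed terms, and adds shape priors:

    ```python
    import numpy as np

    # Synthetic 2D slice: a bright disk (the "lumen") on a dark background.
    n = 64
    yy, xx = np.mgrid[0:n, 0:n]
    img = (((yy - 32) ** 2 + (xx - 32) ** 2) < 12 ** 2).astype(float)

    # Seed: a small circle inside the disk, stored as a signed distance function
    # (negative inside the contour, positive outside).
    phi = np.sqrt((yy - 32.0) ** 2 + (xx - 32.0) ** 2) - 3.0

    # Minimal level-set evolution: dphi/dt = -F * |grad phi|, with a speed F
    # that is positive inside the bright region and negative outside, so the
    # contour grows until it reaches the dark boundary.
    for _ in range(200):
        gy, gx = np.gradient(phi)
        grad = np.sqrt(gx ** 2 + gy ** 2)
        speed = img - 0.5
        phi = np.clip(phi - 0.5 * speed * grad, -10, 10)  # clip keeps phi bounded

    seg = phi < 0
    print(f"segmented {int(seg.sum())} pixels (disk area ~ {np.pi * 12**2:.0f})")
    ```

    Because the contour is carried implicitly by phi, topology changes (splitting, merging) come for free, which is the advantage over classical snakes noted in the abstract.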

  15. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    DTIC Science & Technology

    2008-01-01

    Circuits and Systems, vol. 1 (2), 2007, pp. 116-127. • O. Dandekar, C. Castro-Pareja, and R. Shekhar, "FPGA-based real-time 3D image...How low can we go?," presented at IEEE International Symposium on Biomedical Imaging, 2006, pp. 502-505. • C. R. Castro-Pareja, O. Dandekar, and R...Venugopal, C. R. Castro-Pareja, and O. Dandekar, "An FPGA-based 3D image processor with median and convolution filters for real-time applications," in

  16. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information from the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized color-coded dynamically over time. The dynamic visualizations computed using the curve-fitting method for the estimation of the bolus arrival times were rated superior to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and a better understanding during the visual evaluation of cerebral vascular diseases.
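
    The voxelwise curve-fitting step can be sketched with a gamma-variate bolus model, a common stand-in for a contrast-passage curve (the paper fits to a patient-individual reference curve; all timing values here are synthetic):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, a, t0, alpha, beta):
        """Classic bolus-passage model; t0 is the bolus arrival time."""
        dt = np.clip(t - t0, 0, None)
        return a * dt ** alpha * np.exp(-dt / beta)

    # Synthetic voxel intensity curve: known arrival time of 5.0 s plus noise.
    t = np.linspace(0, 20, 80)
    true = dict(a=2.0, t0=5.0, alpha=2.0, beta=1.5)
    rng = np.random.default_rng(6)
    signal = gamma_variate(t, **true) + rng.normal(0, 0.05, t.size)

    # Fit the model voxelwise; the fitted t0 is the estimated bolus arrival time.
    popt, _ = curve_fit(gamma_variate, t, signal, p0=[1.0, 4.0, 1.5, 1.0])
    arrival = popt[1]
    print(f"estimated bolus arrival time: {arrival:.2f} s (true 5.0 s)")
    ```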

  17. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278
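
    The learning stage, fitting linear operators that map projection intensity residues to model parameters, can be sketched with multi-output least squares on a toy linear forward model. This is a single-scale, non-iterative stand-in for CLARET's multi-scale, coarse-to-fine regressions; the forward model and dimensions are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical forward model: projection intensity residues depend linearly
    # on a low-order motion parameter vector near the reference pose.
    n_params, n_pixels = 3, 200
    J = rng.normal(size=(n_pixels, n_params))  # local linearization of the residue

    # Learning stage: sample parameter vectors, generate their residues, and fit
    # the inverse operator M with multi-output least squares.
    train_p = rng.normal(size=(500, n_params))
    train_res = train_p @ J.T + rng.normal(0, 0.01, (500, n_pixels))
    M, *_ = np.linalg.lstsq(train_res, train_p, rcond=None)

    # Registration stage: a single matrix-vector product estimates the motion
    # parameters from a new residue (the paper iterates this coarse-to-fine).
    p_true = np.array([0.5, -1.2, 0.8])
    residue = J @ p_true + rng.normal(0, 0.01, n_pixels)
    p_est = residue @ M
    print("estimated parameters:", np.round(p_est, 2))
    ```

    Pushing the expensive fitting into the learning stage is what makes the treatment-time registration a few-second operation.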

  18. Complex Resistivity 3D Imaging for Ground Reinforcement Site

    NASA Astrophysics Data System (ADS)

    Son, J.; Kim, J.; Park, S.

    2012-12-01

    The induced polarization (IP) method is used for mineral exploration and is generally classified into two categories: time-domain and frequency-domain methods. The IP method in the frequency domain measures amplitude and phase relative to the transmitted currents, and is often called spectral induced polarization (SIP) when measurements are made over wide-band frequencies. Our research group has been studying modeling and inversion algorithms for the complex resistivity method for several years and recently started to apply the method to various field problems. We have completed the development of 2D/3D modeling and inversion programs and are developing another algorithm that uses the wide-band data altogether. Until now, the complex resistivity (CR) method has mainly been used for surface or tomographic surveys in mineral exploration. Through this experience, we found that the resistivity section from the CR method is very similar to that of the conventional resistivity method, and that interpretation of the phase section generally matches the geological information of the survey area well. But because most survey areas have very rough and complex terrain, 2D surveys and interpretation are generally used. In this study, we introduce a case study of a 3D CR survey conducted at a site where ground reinforcement was carried out to prevent subsidence. Data were acquired with the Zeta system, a complex resistivity measurement system produced by Zonge Co., using 8 frequencies from 0.125 to 16 Hz. A 2D survey was conducted on a total of 6 lines with 5 m dipole spacing and 20 electrodes; the line length is 95 m for every line. Among the 8 frequency data, only data below 1 Hz were used considering their quality. With the 6 lines of data, a 3D inversion was conducted. First, a 2D interpretation was made with the acquired data and its results were compared with those of a resistivity survey. The resulting resistivity image sections of the CR and resistivity methods were very similar, and anomalies in the phase image section showed good agreement.

  19. Photon counting passive 3D image sensing for automatic target recognition.

    PubMed

    Yeom, Seokwon; Javidi, Bahram; Watson, Edward

    2005-11-14

    In this paper, we propose photon counting three-dimensional (3D) passive sensing and object recognition using integral imaging. The application of this approach to 3D automatic target recognition (ATR) is investigated using both linear and nonlinear matched filters. We find there is significant potential of the proposed system for 3D sensing and recognition with a low number of photons. The discrimination capability of the proposed system is quantified in terms of discrimination ratio, Fisher ratio, and receiver operating characteristic (ROC) curves. To the best of our knowledge, this is the first report on photon counting 3D passive sensing and ATR with integral imaging.
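
    Photon-limited classification with a linear matched filter can be sketched by drawing Poisson photon counts from a normalized irradiance pattern and correlating them with zero-mean references. The targets, photon budget, and score are illustrative stand-ins, not the paper's integral-imaging setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Two hypothetical reference irradiance patterns, normalized so each is a
    # probability map over the detector pixels.
    target_a = np.outer(np.hanning(16), np.hanning(16))
    target_b = np.roll(target_a, 5, axis=1) * 0.7 + 0.05
    for_a = target_a / target_a.sum()
    for_b = target_b / target_b.sum()

    # Photon-starved observation of target A: Poisson counts per pixel.
    n_photons = 500
    counts = rng.poisson(n_photons * for_a)

    def score(counts, ref):
        """Linear matched filter: correlation with the zero-mean reference."""
        r = ref - ref.mean()
        return float((counts * r).sum() / np.linalg.norm(r))

    # Recognition: pick the reference with the larger matched-filter response.
    s_a, s_b = score(counts, for_a), score(counts, for_b)
    print(f"score A = {s_a:.3f}, score B = {s_b:.3f}")
    ```

    Even with only a couple of photons per pixel on average, the correct reference wins because the matched filter integrates all counts against the expected pattern.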

  20. Fast 3-d tomographic microwave imaging for breast cancer detection.

    PubMed

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring.

  1. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

    This paper presents a method for 3-D segmentation of abdominal aortic aneurysms from computed tomography angiography (CTA) images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. The two steps differ because the image conditions at the two borders differ. Together, the outputs of these two segmentations give a complete 3-D model of the abdominal aorta, which is used for measurements of the aneurysm. The deformable model is implemented using the level-set algorithm because of its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. For segmentation of the outer aortic boundary, we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments performed on real patient CTA images have shown good results.

  2. 3D quantitative analysis of brain SPECT images

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Ceskovic, Ivan; Petrovic, Ratimir; Loncaric, Srecko

    2001-07-01

    The main purpose of this work is to develop a computer-based technique for quantitative analysis of 3-D brain images obtained by single photon emission computed tomography (SPECT). In particular, the volume and location of the ischemic lesion and penumbra are important for early diagnosis and treatment of infarcted regions of the brain. SPECT imaging is typically used as a diagnostic tool to assess the size and location of the ischemic lesion. The segmentation method presented in this paper utilizes a 3-D deformable model to determine the size and location of the regions of interest. The evolution of the model is computed using a level-set implementation of the algorithm. In addition to the 3-D deformable model, the method utilizes edge detection and region growing for pre-processing. Initial experimental results have shown that the method is useful for SPECT image analysis.
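    Of the pre-processing steps mentioned, region growing is easy to sketch. The snippet below grows a 4-connected region from a seed pixel on a toy 2-D slice; the tolerance criterion and the data are illustrative, not the authors' actual settings.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol` (breadth-first)."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    visited = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                if abs(image[nr][nc] - seed_val) <= tol:
                    visited.add((nr, nc))
                    queue.append((nr, nc))
    return visited

# Toy "SPECT slice": a bright lesion (values near 100) on a dark background.
slice_ = [
    [10, 12, 11, 10],
    [11, 98, 99, 12],
    [10, 97, 100, 11],
    [12, 10, 11, 10],
]
lesion = region_grow(slice_, (1, 1), tol=5)
```

The recovered set of pixel coordinates would seed the deformable-model stage in a pipeline like the one described.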

  3. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects acquired in the supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads and the center and inclination of the sacral endplate, in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed in the perfect sagittal views.
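    The geometric definition of PI (the angle between the sacral endplate normal and the line joining the endplate center to the center of the hip axis) can be computed directly from the 3D landmarks the method detects. A sketch with synthetic landmark coordinates chosen so the expected angle is known; the landmark values are invented, not taken from the study.

```python
import math

def pelvic_incidence(hip_left, hip_right, endplate_center, endplate_normal):
    """Angle of pelvic incidence from 3D landmarks: the angle between the
    sacral endplate normal and the line joining the endplate center to the
    hip axis center (midpoint of the two femoral head centers)."""
    hip_center = tuple((a + b) / 2.0 for a, b in zip(hip_left, hip_right))
    v = tuple(h - e for h, e in zip(hip_center, endplate_center))
    dot = sum(a * b for a, b in zip(v, endplate_normal))
    nv = math.sqrt(sum(a * a for a in v))
    nn = math.sqrt(sum(a * a for a in endplate_normal))
    return math.degrees(math.acos(dot / (nv * nn)))

# Synthetic geometry with a known 47 degree incidence angle.
angle = math.radians(47.0)
normal = (0.0, 0.0, -1.0)   # endplate normal, pointing inferiorly here
hip_l = (-30.0, math.sin(angle) * 100.0, -math.cos(angle) * 100.0)
hip_r = (30.0, math.sin(angle) * 100.0, -math.cos(angle) * 100.0)
pi_deg = pelvic_incidence(hip_l, hip_r, (0.0, 0.0, 0.0), normal)
```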

  4. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system comprising visible, thermal-IR and time-of-flight (ToF) sensors, which we refer to as a 3D multi-spectrum sensor system, is proposed. Since the proposed system integrates the information from each sensor into one calibrated framework, the optimal sensor combination for an application can easily be selected, taking all combinations of sensor information into account. To demonstrate the effectiveness of the proposed system, a face recognition system robust to light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for face recognition, is obtained. PMID:24072025

  5. Episcopic 3D Imaging Methods: Tools for Researching Gene Function

    PubMed Central

    Weninger, Wolfgang J; Geyer, Stefan H

    2008-01-01

    This work aims to describe episcopic 3D imaging methods and to discuss how these methods can contribute to research into the genetic mechanisms driving embryogenesis and tissue remodelling, and into the genesis of pathologies. Several episcopic 3D imaging methods exist. The most advanced are capable of generating high-resolution volume data (voxel sizes from 0.5 × 0.5 × 1 µm upwards) of small to large embryos of model organisms and of tissue samples. Besides anatomy and tissue architecture, gene expression and gene product patterns can be analyzed three-dimensionally in their precise anatomical and histological context with the aid of whole-mount in situ hybridization or whole-mount immunohistochemical staining techniques. Episcopic 3D imaging techniques were and are employed for analyzing the precise morphological phenotype of experimentally malformed, randomly produced, or genetically engineered embryos of biomedical model organisms. It has been shown that episcopic 3D imaging is also suited to describing the spatial distribution of genes and gene products during embryogenesis, and that it can be used for analyzing tissue samples of adult model animals and humans. The latter offers the possibility of using episcopic 3D imaging techniques for researching the causality and treatment of pathologies, or for staging cancer. Such applications, however, are not yet routine, and currently only preliminary results are available. We conclude that, although episcopic 3D imaging is in its very beginnings, it represents an upcoming methodology which, in the short term, will become an indispensable tool for researching the genetic regulation of embryo development as well as the genesis of malformations and diseases. PMID:19452045

  6. 3D x-ray reconstruction using lightfield imaging

    NASA Astrophysics Data System (ADS)

    Saha, Sajib; Tahtali, Murat; Lambert, Andrew; Pickering, Mark R.

    2014-09-01

    Existing computed tomography (CT) systems require projections over a full 360° rotation. Using the principles of lightfield imaging, as few as 4 projections can be sufficient under ideal conditions when the object is illuminated with multiple-point X-ray sources. The concept was presented in a previous work using synthetically sampled data from a synthetic phantom. Application to real data requires precise calibration of the physical setup. The current work presents the calibration procedures along with experimental findings for the reconstruction of a physical 3D phantom consisting of simple geometric shapes. The crucial part of this process is determining the effective distances of the X-ray paths, which are very difficult or impossible to obtain by direct measurement. Instead, they are calculated by tracking the positions of fiducial markers under prescribed source and object movements. Iterative algorithms are used for the reconstruction, with a customized backprojection used to provide a better initial guess for the iterations to start from.

  7. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine, and 3 points were then captured to register the ultrasound image with the CT or magnetic resonance image scan. Once registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections achieved only 30% accuracy, whereas ultrasound-guided injections roughly tripled that figure. The technique presented here exhibited a needle guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.

  8. Image quality enhancement and computation acceleration of 3D holographic display using a symmetrical 3D GS algorithm.

    PubMed

    Zhou, Pengcheng; Bi, Yong; Sun, Minyuan; Wang, Hao; Li, Fang; Qi, Yan

    2014-09-20

    The 3D Gerchberg-Saxton (GS) algorithm can be used to compute a computer-generated hologram (CGH) to produce a 3D holographic display. However, with the 3D GS method, reconstructions of binary input images suffer serious distortion. We have eliminated this distortion and improved the image quality of the reconstructions by up to 486% using a symmetrical 3D GS algorithm developed from the traditional 3D GS algorithm. In addition, the hologram computation has been accelerated by a factor of 9.28, which is significant for real-time holographic displays.
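    For context, the classic two-plane Gerchberg-Saxton iteration that the 3D GS and symmetrical 3D GS methods build on can be sketched in a few lines: enforce unit amplitude in the hologram plane and the target amplitude in the image plane, keeping only the phase from each transform. This 1D toy uses a naive unitary DFT and is not the authors' symmetrical variant.

```python
import cmath
import math
import random

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (unitary normalization)."""
    n = len(x)
    sign = 1j if inverse else -1j
    scale = 1.0 / math.sqrt(n)
    return [scale * sum(x[m] * cmath.exp(sign * 2 * math.pi * k * m / n)
                        for m in range(n)) for k in range(n)]

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Classic two-plane GS phase retrieval: alternate between the
    hologram plane (unit amplitude) and the image plane (target
    amplitude), retaining only phase from each transform."""
    rng = random.Random(seed)
    # Start from the target amplitude with a random phase guess.
    field = [a * cmath.exp(2j * math.pi * rng.random())
             for a in target_amplitude]
    errors = []
    for _ in range(iterations):
        holo = dft(field, inverse=True)
        holo = [cmath.exp(1j * cmath.phase(h)) for h in holo]  # unit amplitude
        img = dft(holo)
        errors.append(sum((abs(v) - a) ** 2
                          for v, a in zip(img, target_amplitude)))
        field = [a * cmath.exp(1j * cmath.phase(v))            # target amplitude
                 for a, v in zip(target_amplitude, img)]
    return holo, errors

bits = [1, 0, 1, 0, 1, 1, 0, 0]                    # binary target image
scale = math.sqrt(len(bits) / sum(bits))           # match hologram energy
target = [b * scale for b in bits]
hologram, errors = gerchberg_saxton(target)
```

The image-plane error of this error-reduction iteration is non-increasing, although it can stagnate, which is one motivation for the variants the paper develops.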

  9. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking this health issue into account, understanding how a user gazes in 3D virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision® for use with desktop stereoscopic displays. We propose an optimized geometric method to accurately measure the gazed position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
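    A common geometric method for estimating the 3D gaze point, which optimized variants such as the paper's refine, takes the midpoint of the shortest segment between the two eyes' gaze rays. A sketch with invented eye positions (centimeters); this is the generic construction, not the paper's calibrated system.

```python
def gaze_point_3d(eye_l, dir_l, eye_r, dir_r):
    """Midpoint of the shortest segment between two gaze rays
    eye + s*dir, using the standard closest-points-of-two-lines solve."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    w = tuple(a - b for a, b in zip(eye_l, eye_r))
    a, b, c = dot(dir_l, dir_l), dot(dir_l, dir_r), dot(dir_r, dir_r)
    d, e = dot(dir_l, w), dot(dir_r, w)
    denom = a * c - b * b                     # nonzero for non-parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p = tuple(x + s * u for x, u in zip(eye_l, dir_l))
    q = tuple(x + t * u for x, u in zip(eye_r, dir_r))
    return tuple((x + y) / 2.0 for x, y in zip(p, q))

# Both eyes (6.5 cm apart, 60 cm from the screen plane z = 0) converge on
# a virtual object 10 cm behind the screen.
target = (1.0, 2.0, 10.0)
eye_l, eye_r = (-3.25, 0.0, -60.0), (3.25, 0.0, -60.0)
dir_l = tuple(t - e for t, e in zip(target, eye_l))
dir_r = tuple(t - e for t, e in zip(target, eye_r))
gaze = gaze_point_3d(eye_l, dir_l, eye_r, dir_r)
```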

  10. Venus Topography in 3D: Imaging of Coronae and Chasmata

    NASA Astrophysics Data System (ADS)

    Jurdy, D. M.; Stefanick, M.; Stoddard, P. R.

    2006-12-01

    Venus' surface hosts hundreds of circular to elongate features ranging from 60 to 2600 km in diameter and averaging somewhat over 200 km. These enigmatic structures have been classified as "coronae" and attributed to either tectono-volcanic or impact-related mechanisms. A linear to arcuate system of chasmata, rugged zones containing some of Venus' deepest troughs, extends for thousands of kilometers. The chasmata have extreme relief, with elevations changing by as much as 7 km over just 30 km of distance. The 54,464 km long chasmata system defined in great detail by Magellan can be fit by great circle arcs at the 89.6% level, and, when corrected for the smaller size of the planet, its total length measures within 2.7% of the length of Earth's spreading ridges. The relatively young Beta-Atla-Themis (BAT) region, within 30° of the equator from 180° to 300° longitude, has the planet's strongest geoid highs and profuse volcanism. This region, the intersection of three rift zones, also has a high concentration of coronae, with individual coronae closely associated with the chasmata system. The chasmata with the greatest relief on Venus show linear rifting that prevailed in the latest stage of tectonic deformation. For a three-dimensional view of Venus' surface, we spread out the Magellan topography on a flat surface using a Mercator projection to preserve shape. Next, we illuminate the surface with beams at 45° from the left (or right) to simulate mid-afternoon (or mid-morning) lighting. Finally, we observe the surface with two eyes looking through orange and azure colored filters, respectively. This gives a 3D view of tectonic features in the BAT area. The 3D images clearly show coronae sharing boundaries with the chasmata, suggesting that rifting and corona formation occur together; it seems unlikely that impact craters would create this pattern.

  11. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second, based on a 3-D plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler imaging can be used to perform imaging of blood vessels in humans when using coherent compounding of 3-D tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D ultrafast imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, Tours, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. The proof of principle of 3-D ultrafast power Doppler imaging was first performed by imaging Tygon tubes of various diameters, and in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D color and pulsed Doppler imaging using compounded emissions were also applied in the carotid artery and the jugular vein in one healthy volunteer.
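    The voxel-wise Doppler processing can be illustrated with power Doppler, the simplest of the three modes: after clutter (wall) filtering the slow-time ensemble of a voxel, the remaining signal power is integrated. The sketch below uses mean subtraction as a deliberately crude wall filter and invented IQ data; real ultrafast pipelines use far better clutter filters.

```python
import cmath
import math

def power_doppler(ensemble):
    """Voxel-wise power Doppler from a slow-time ensemble of complex IQ
    samples: remove the static (mean) component, then average the
    remaining power."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    return sum(abs(s - mean) ** 2 for s in ensemble) / n

# Hypothetical voxels: one containing moving blood (rotating phasor from a
# Doppler shift) and one containing static tissue (constant phasor).
prf, f_doppler = 1000.0, 150.0   # slow-time sampling rate and Doppler freq (Hz)
blood = [cmath.exp(2j * math.pi * f_doppler * k / prf) for k in range(32)]
tissue = [5.0 + 0.0j] * 32
```

Static tissue is suppressed entirely, while the moving-blood voxel retains nearly all of its power; mapping this value over the volume gives the vessel images described above.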

  12. Realization of an aerial 3D image that occludes the background scenery.

    PubMed

    Kakeya, Hideki; Ishizuka, Shuta; Sato, Yuya

    2014-10-06

    In this paper we describe an aerial 3D image that occludes far background scenery, based on coarse integral volumetric imaging (CIVI) technology. Many volumetric display devices can present floating 3D images, but most of them do not reproduce visual occlusion. CIVI is a kind of multilayered integral imaging that realizes an aerial volumetric image with visual occlusion by combining multiview and volumetric display technologies. Conventional CIVI, however, cannot show a deep space, because the low transmittance of each panel limits the number of layered panels. To overcome this problem, we propose a novel optical design that attains an aerial 3D image occluding far background scenery. In the proposed system, a translucent display panel with a 120 Hz refresh rate is located between the CIVI system and the aerial 3D image, and the system alternates between an aerial image mode and a background image mode. In the aerial image mode, the elemental images are shown on the CIVI display and the inserted translucent display is uniformly translucent. In the background image mode, the black shadows of the elemental images are shown in a white background on the CIVI display, and the background scenery is displayed on the inserted translucent panel. By alternating these two modes at 120 Hz, the viewer perceives an aerial 3D image that visually occludes the far background scenery.

  13. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capability to produce the images themselves. This is an ironic paradox: the new 3-D and 4-D imaging capabilities promise greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever before, yet the momentous advances in computer and associated electronic imaging technology that made these 3-D imaging capabilities possible have not been concomitantly developed to fully exploit them. We have therefore developed a powerful new microcomputer-based system that permits detailed investigation and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation through which all the information in a large 3-D image database is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of the system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to its implementation.

  14. Reducing the influence of direct reflection on return signal detection in a 3D imaging lidar system by rotating the polarizing beam splitter.

    PubMed

    Wang, Chunhui; Lee, Xiaobao; Cui, Tianxiang; Qu, Yang; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-03-01

    The direction rule of a laser beam traveling through a deflected polarizing beam splitter (PBS) cube is derived. It reveals that, due to end-face reflection of the PBS at the detector side, the emergent beam from the incident beam remains parallel to its direction in the original, unrotated case, with only a very small translation between them; the formula for this translation interval is also given. Meanwhile, the emergent beam from the return signal at the detector side is deflected by an angle twice the PBS rotation angle, which has been verified by experiment. The intensity transmittance of the beam propagating through the PBS changes very little if the rotation angle is less than 35 deg. In a 3D imaging lidar system, rotating the PBS cube by an angle separates the optical axis of the return signal from that of the original beam, which can decrease or eliminate the influence of direct reflection from the prism end face on target return signal detection. This has also been checked by experiment.
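    The factor-of-two relation between the rotation of a reflecting face and the deflection of the reflected beam follows from the law of reflection. A minimal 2-D sketch, modeling the PBS end face as a plane mirror (the angles are invented):

```python
import math

def reflect(d, theta):
    """Reflect direction vector d off a surface whose normal makes angle
    theta with the x-axis (2-D law of reflection: d' = d - 2(d.n)n)."""
    n = (math.cos(theta), math.sin(theta))
    dn = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dn * n[0], d[1] - 2 * dn * n[1])

def angle(v):
    return math.atan2(v[1], v[0])

incoming = (-1.0, 0.0)                      # beam travelling along -x onto the face
r0 = reflect(incoming, 0.0)                 # un-rotated face: retro-reflection
r1 = reflect(incoming, math.radians(5.0))   # face rotated by 5 degrees
# The reflected beam rotates by twice the face rotation angle.
deflection = math.degrees(angle(r1) - angle(r0))
```

A 5 degree rotation of the face steers the reflection by 10 degrees, which is why a small PBS tilt suffices to walk the end-face reflection off the return-signal axis.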

  15. Advanced 3D polarimetric flash ladar imaging through foliage

    NASA Astrophysics Data System (ADS)

    Murray, James T.; Moran, Steven E.; Roddier, Nicolas; Vercillo, Richard; Bridges, Robert; Austin, William

    2003-08-01

    High-resolution three-dimensional flash ladar system technologies are under development that enable remote identification of vehicles and armament hidden by heavy tree canopies. We have developed a sensor architecture and design that employs a 3D flash ladar receiver to address this mission. The receiver captures a 128×128×(>30)-voxel three-dimensional image for each laser pulse fired, with a voxel size of 3"×3"×4" at the target location. A novel signal-processing algorithm has been developed that achieves sub-voxel (sub-inch) range-precision estimates of target locations within each pixel. Polarization discrimination is implemented to augment the target-to-foliage contrast; when employed, this method improves the range resolution of the system beyond the classical limit (based on pulse width and detection bandwidth). Experiments performed with a 6 ns transmitter pulse width demonstrate 1-inch range resolution of a tank-like target occluded by foliage, and a range precision of 0.3" for unoccluded targets.
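    The abstract does not spell out the sub-voxel range estimator; one standard way to localize a return pulse to a fraction of a range bin is an intensity centroid over the per-pixel waveform, sketched below with an invented waveform. This is an illustrative stand-in, not the authors' algorithm.

```python
def subvoxel_range(samples, bin_size_inches):
    """Intensity centroid of the return-pulse waveform across coarse range
    bins, giving a range estimate at a fraction of the bin size."""
    total = sum(samples)
    centroid_bins = sum(i * s for i, s in enumerate(samples)) / total
    return centroid_bins * bin_size_inches

# A symmetric pulse peaking halfway between bins 4 and 5 of a grid with
# 4-inch-deep range bins.
waveform = [0, 0, 0, 1, 6, 6, 1, 0, 0, 0]
range_in = subvoxel_range(waveform, 4.0)
```

The centroid lands at bin 4.5, i.e. 18 inches, even though each bin is 4 inches deep: sub-voxel precision from coarse samples.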

  16. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications, in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce the computational complexity of 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computational complexity significantly: for a rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for the full 3D volume, and the number of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with the sum of squared differences (SSD) as the similarity measure and Powell's conjugate direction method as the search engine. In this paper, only rigid transforms are used; however, the approach can be extended to affine transforms by adding scaling, and possibly shearing, to the transform model. We have noticed that more information can be used in the 2D registrations if maximum intensity projections (MIP) or parallel projections (PP) are used instead of the orthogonal views. Other similarity measures, such as covariance or mutual information, can also easily be incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment, and structural changes in materials before and after compression. An evaluation of registration accuracy comparing the pseudo-3D method with a true 3D method has also been performed.
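    The per-view core of the pseudo-3D scheme (2D registration under an SSD similarity measure) can be sketched with a brute-force translation search. Note the paper uses Powell's conjugate direction method rather than exhaustive search, and also estimates rotation; the toy images below are invented.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-sized 2D views."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def shift2d(img, dr, dc, fill=0):
    """Translate a 2D image by (dr, dc), filling exposed pixels."""
    rows, cols = len(img), len(img[0])
    return [[img[r - dr][c - dc] if 0 <= r - dr < rows and 0 <= c - dc < cols
             else fill for c in range(cols)] for r in range(rows)]

def best_shift(fixed, moving, search=2):
    """Exhaustive 2D translation search minimizing SSD."""
    best = None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cost = ssd(fixed, shift2d(moving, dr, dc))
            if best is None or cost < best[0]:
                best = (cost, dr, dc)
    return best[1], best[2]

fixed = [[0, 0, 0, 0, 0],
         [0, 0, 9, 8, 0],
         [0, 0, 7, 9, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]
moving = shift2d(fixed, 1, -1)       # same view translated by (+1, -1)
dr, dc = best_shift(fixed, moving)   # recovers the inverse shift
```

In the pseudo-3D method this 2D solve would run on each orthogonal view in turn, updating the 3D transformation matrix between views.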

  17. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but a completely robust system for plants has been lacking. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants with a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than 13 mm error for plant size, leaf size and internode distance.

  19. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. The process is prone to human error, and the quality of the documentation depends on the qualification of the archaeologist on site. The use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of acquiring up to 1 million points per second, providing a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site give the archaeologist a better representation of the environment for documentation, and the point cloud can be used both for further studies of the excavation and for the presentation of results. This paper introduces a Computer Aided System for Labelling Archaeological Excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides 2D and 3D visualisations of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble traditional hand drawings, while the 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data, and multiple regions of interest can be joined under a single label.

  20. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth's surface are inverted into a 2- or 3-D spatial distribution of the physical property in the subsurface. Interpreting these models in terms of structural objects related to physical processes requires a priori knowledge and expert analysis, which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique, using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective, knowledge-based classification scheme. The approach allows a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way: the changes require only altering the thresholds in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach was tested on a synthetic model based on a priori knowledge of objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and the objects were fully retrieved. The real-model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions, in which boundaries become fuzzy, object extents become unclear, and model characteristics vary with depth due to the different physical conditions. As expected, the 3-D histogram of the real data was less sharply defined than that of the synthetic model.

  1. Noninvasive computational imaging of cardiac electrophysiology for 3-D infarct.

    PubMed

    Wang, Linwei; Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng

    2011-04-01

Myocardial infarction (MI) creates electrophysiologically altered substrates that are responsible for ventricular arrhythmias, such as tachycardia and fibrillation. The presence, size, location, and composition of infarct scar bear significant prognostic and therapeutic implications for individual subjects. We have developed a statistical physiological model-constrained framework that uses noninvasive body-surface-potential (BSP) data and tomographic images to estimate subject-specific transmembrane-potential (TMP) dynamics inside the 3-D myocardium. In this paper, we adapt this framework for the purpose of noninvasive imaging, detection, and quantification of 3-D scar mass for post-MI patients: the framework requires no prior knowledge of MI and converges to final subject-specific TMP estimates after several passes of estimation with intermediate feedback; based on the primary features of the estimated spatiotemporal TMP dynamics, we provide 3-D imaging of scar tissue and quantitative evaluation of scar location and extent. Phantom experiments were performed on a computational model of realistic heart-torso geometry, considering 87 transmural infarct scars of different sizes and locations inside the myocardium, and 12 compact infarct scars (extent between 10% and 30%) at different transmural depths. Real-data experiments were carried out on BSP and magnetic resonance imaging (MRI) data from four post-MI patients, validated by gold standards and existing results. This framework shows the unique advantage of noninvasive, quantitative, computational imaging of subject-specific TMP dynamics and infarct mass of the 3-D myocardium, with the potential to reflect details in the spatial structure and tissue composition/heterogeneity of 3-D infarct scar.

  2. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization.

    PubMed

    Sato, Y; Nakamoto, M; Tamaki, Y; Sasama, T; Sakita, I; Nakajima, Y; Monden, M; Tamura, S

    1998-10-01

This paper describes augmented reality visualization for the guidance of breast-conservative cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. Using an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured to reconstruct geometrically accurate 3-D tumor models from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data.
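The geometric core of such an overlay is projecting tracked 3-D model points into the live video frame. A minimal sketch under assumed pinhole-camera conventions (the function name `project` and the intrinsics matrix `K` are illustrative, not taken from the paper):

```python
import numpy as np

def project(points, R, t, K):
    """Project Nx3 world-frame points into the image, given the camera
    rotation R, translation t (e.g. from the optical tracker) and the
    3x3 intrinsics matrix K. Returns Nx2 pixel coordinates."""
    cam = (R @ points.T).T + t         # world -> camera coordinates
    uvw = (K @ cam.T).T                # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> pixels
```

With the camera pose from the position sensor and the tumor model vertices in the same world frame, the projected points can be drawn over the video image.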

  3. Refraction Correction in 3D Transcranial Ultrasound Imaging

    PubMed Central

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
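The 3D form of Snell's law used in such refraction correction can be written in vector terms. A minimal sketch, assuming a planar interface and known sound speeds (the function `refract` is illustrative, not the authors' implementation; note that for acoustics sin θ₂/sin θ₁ = c₂/c₁):

```python
import numpy as np

def refract(d, n, c1, c2):
    """Refract unit direction d at a planar interface with unit normal n,
    going from a medium with sound speed c1 into one with speed c2.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    r = c2 / c1                      # acoustic Snell: sin(t)/sin(i) = c2/c1
    cos_i = -np.dot(n, d)
    if cos_i < 0:                    # flip normal to oppose the incident ray
        n, cos_i = -n, -cos_i
    k = 1.0 - r**2 * (1.0 - cos_i**2)
    if k < 0:
        return None                  # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n
```

Tracing a ray through a two-layer tissue/skull model amounts to applying this refraction at each interface and accumulating the travel times to precompute the phased-array delays.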

  4. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  5. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. To evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper, an integrated system has been developed to record the patient's occlusion using computer vision. Data are acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototyping machine.

  6. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  7. Subjective evaluation of a 3D videoconferencing system

    NASA Astrophysics Data System (ADS)

    Rizek, Hadi; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Johanson, Mathias

    2014-03-01

A shortcoming of traditional videoconferencing systems is that they present the user with a flat, two-dimensional image of the remote participants. Recent advances in autostereoscopic display technology now make it possible to develop videoconferencing systems supporting true binocular depth perception. In this paper, we present a subjective evaluation of a prototype multiview autostereoscopic videoconferencing system and suggest a number of possible improvements based on the results. Whereas methods for subjective evaluation of traditional 2D videoconferencing systems are well established, the introduction of 3D requires an extension of the test procedures to assess the quality of depth perception. For this purpose, two depth-based test tasks were designed and experiments were conducted with test subjects comparing the 3D system to a conventional 2D videoconferencing system. The outcome of the experiments shows that the perception of depth is significantly improved in the 3D system, but the overall quality of experience is higher in the 2D system.

  8. Geodetic imaging of potential seismogenic asperities on the Xianshuihe-Anninghe-Zemuhe fault system, southwest China, with a new 3-D viscoelastic interseismic coupling model

    NASA Astrophysics Data System (ADS)

    Jiang, Guoyan; Xu, Xiwei; Chen, Guihua; Liu, Yajing; Fukahata, Yukitoshi; Wang, Hua; Yu, Guihua; Tan, Xibin; Xu, Caijun

    2015-03-01

We use GPS and interferometric synthetic aperture radar (InSAR) measurements to image the spatial variation of interseismic coupling on the Xianshuihe-Anninghe-Zemuhe (XAZ) fault system. A new 3-D viscoelastic interseismic deformation model is developed to infer the rotation and strain rates of blocks, postseismic viscoelastic relaxation, and the interseismic slip deficit on the fault surface, discretized with triangular dislocation patches. The inversions of synthetic data show that the optimal weight ratio and smoothing factor are both 1. The successive joint inversions of geodetic data with different viscosities reveal six potential fully coupled asperities on the XAZ fault system. Among them, the potential asperity between Shimian and Mianning, which does not exist in the case of 10^19 Pa s, is confirmed by the published microearthquake depth profile. In addition, there is another potential partially coupled asperity between Daofu and Kangding with a length scale of up to 140 km. All these asperity sizes are larger than the minimum resolvable wavelength. The minimum and maximum slip deficit rates near the Moxi town are 7.0 and 12.7 mm/yr, respectively. Different viscosities have little influence on the roughness of the slip deficit rate distribution and the fitting residuals, which probably suggests that our observations cannot provide a good constraint on the viscosity of the middle-lower crust. The calculation of seismic moment accumulation on each segment indicates that the Songlinkou-Selaha (S4), Shimian-Mianning (S7), and Mianning-Xichang (S8) segments are very close to the rupture of characteristic earthquakes. However, the confidence level is limited by sparse near-fault observations.

  9. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API, to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, depth-map-altered textured surfaces and perspective stereoscopic cameras with visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
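The underlying geometry of such 2D-to-3D synthesis is that each pixel's horizontal parallax follows from its depth relative to the zero-parallax plane. A toy sketch of this depth-to-parallax mapping and a naive warp (the function names and the simple pinhole-style model are assumptions, not the plug-in's actual algorithm):

```python
import numpy as np

def depth_to_parallax(depth, zero_parallax, eye_sep, focal):
    """Per-pixel horizontal parallax (in pixels): points at the
    zero-parallax depth get no shift; nearer points get negative
    (crossed) parallax, farther ones positive."""
    return eye_sep * focal * (1.0 / zero_parallax - 1.0 / depth)

def synthesize_pair(image, depth, zero_parallax, eye_sep=6.0, focal=50.0):
    """Naive warp: shift each pixel left/right by half its parallax
    to form a stereo pair (no hole filling)."""
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            dx = int(round(depth_to_parallax(depth[y, x], zero_parallax,
                                             eye_sep, focal) / 2))
            if 0 <= x + dx < w:
                left[y, x + dx] = image[y, x]
            if 0 <= x - dx < w:
                right[y, x - dx] = image[y, x]
    return left, right
```

The occluded regions that this warp leaves empty are exactly the "hidden areas" the paper describes simulating by hand in Photoshop.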

  10. Astigmatic multifocus microscopy enables deep 3D super-resolved imaging

    PubMed Central

    Oudjedi, Laura; Fiche, Jean-Bernard; Abrahamsson, Sara; Mazenq, Laurent; Lecestre, Aurélie; Calmon, Pierre-François; Cerf, Aline; Nöllmann, Marcelo

    2016-01-01

We have developed a 3D super-resolution microscopy method that enables deep imaging in cells. This technique relies on the effective combination of multifocus microscopy and astigmatic 3D single-molecule localization microscopy. We describe the optical system and the fabrication process of its key element, the multifocus grating. Then, two strategies for localizing emitters with our imaging method are presented and compared with a previously described deep 3D localization algorithm. Finally, we demonstrate the performance of the method by imaging the nuclear envelope of eukaryotic cells, reaching a depth of field of ~4 µm. PMID:27375935

  11. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the FCM system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controllers.
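The FCM updates referred to above alternate between recomputing centroids from the fuzzy memberships and recomputing memberships from distances to the centroids. A minimal sketch of standard FCM only (not the full AFLC architecture; the function name is illustrative):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: alternate the centroid and membership updates
    that AFLC borrows for relocating its cluster prototypes.
    X: (n, d) samples; c: number of clusters; m: fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)            # standard FCM membership update
    return centers, U
```

On well-separated data the centroids converge quickly to the cluster means, which is the behavior AFLC exploits when refining its ART-selected prototypes.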

  12. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    PubMed

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, inevitable noise, including background noise and dark-count noise, remains a significant challenge in obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms during the acquisition of raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image was obtained with the experimental system despite the background noise of a sunny day.
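A common filtering idea for this setting, which may differ from the paper's exact strategy, is to histogram photon arrival times over many pulses: true returns pile up in one TOF bin while background and dark counts spread roughly uniformly across the range gate. A hypothetical per-pixel sketch:

```python
import numpy as np

def range_from_tof(tof_samples, gate_ns, bin_ns=1.0):
    """Histogram-peak range extraction for one Gm-APD pixel: accumulate
    photon TOFs (ns) over many pulses, take the peak bin as the signal
    return, and convert its centre to a one-way range in metres."""
    edges = np.arange(0.0, gate_ns + bin_ns, bin_ns)
    hist, _ = np.histogram(tof_samples, bins=edges)
    peak = np.argmax(hist)
    tof = 0.5 * (edges[peak] + edges[peak + 1])   # bin centre, ns
    c = 0.299792458                                # speed of light, m/ns
    return c * tof / 2.0                           # round-trip -> range
```

Counts landing outside the peak bin are effectively discarded as false alarms, which is the essence of suppressing solar background in photon-counting imaging.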

  13. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with the spatial data and query processing capabilities of geographic information systems, multimedia database functionality and a graphical modeling infrastructure. A new data element, called a Geo-Node, which stores images, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles the transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling, and the smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes, independently of the amount of spatial data and image size. The proposed platform may contribute to the development of various applications, such as Web GIS systems based on 3-D graphics standards (e.g., X3D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  14. 3-D Deep Penetration Photoacoustic Imaging with a 2-D CMUT Array.

    PubMed

    Ma, Te-Jen; Kothapalli, Sri Rajasekhar; Vaithilingam, Srikant; Oralkan, Omer; Kamaya, Aya; Wygant, Ira O; Zhuang, Xuefeng; Gambhir, Sanjiv S; Jeffrey, R Brooke; Khuri-Yakub, Butrus T

    2010-10-11

    In this work, we demonstrate 3-D photoacoustic imaging of optically absorbing targets embedded as deep as 5 cm inside a highly scattering background medium using a 2-D capacitive micromachined ultrasonic transducer (CMUT) array with a center frequency of 5.5 MHz. 3-D volumetric images and 2-D maximum intensity projection images are presented to show the objects imaged at different depths. Due to the close proximity of the CMUT to the integrated frontend circuits, the CMUT array imaging system has a low noise floor. This makes the CMUT a promising technology for deep tissue photoacoustic imaging.

  15. 3D imaging of the mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Faivre, Michael; Moreels, Guy; Clairemidi, Jacques; Mougin-Sisini, Davy; Meriwether, John W.; Lehmacher, Gerald A.; Vidal, Erick; Veliz, Oskar

A new and original stereo-imaging method is introduced to measure the altitude of the OH airglow layer and provide a 3D map of the altitude of the layer centroid. Near-IR photographs of the layer are taken at two sites 645 km apart. Each photograph is processed to invert the perspective effect and provide a satellite-type view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient. This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12° 09' 08.2" S, 75° 33' 49.3" W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16° 33' 17.6" S, 71° 39' 59.4" W, altitude 2330 m) close to Arequipa. 3D maps of the layer surface are retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 87.1 km on July 26 and 89.5 km on July 28. Comparable wavy relief features appear in the 3D and intensity maps.
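Matching a point between the two satellite-type views by maximizing a normalized cross-correlation coefficient can be sketched as follows (a generic NCC patch search with illustrative function names, not the authors' code):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two
    equal-size patches (mean-subtracted, in [-1, 1])."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(patch, image, y0, x0, radius):
    """Search a (2*radius+1)^2 window around (y0, x0) in `image` for
    the top-left corner whose patch maximizes NCC with `patch`."""
    h, w = patch.shape
    best = (-2.0, y0, x0)            # (score, y, x)
    for y in range(y0 - radius, y0 + radius + 1):
        for x in range(x0 - radius, x0 + radius + 1):
            if 0 <= y and y + h <= image.shape[0] and 0 <= x and x + w <= image.shape[1]:
                s = ncc(patch, image[y:y + h, x:x + w])
                if s > best[0]:
                    best = (s, y, x)
    return best
```

Because NCC normalizes out local brightness and contrast, the search remains usable on low-contrast airglow structures, as the abstract notes.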

  16. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea of the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, the user performs a rough manual preconfiguration of the prosthesis models so that the subsequent fine matching process has a reasonable starting point. After that, an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
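The fine matching step described above can be sketched as a greedy search over the six pose parameters, accepting any perturbation that raises the matching score and shrinking the step size otherwise (an illustrative hill-climbing variant, not necessarily the authors' gradient scheme; `match` stands for their matching function):

```python
import numpy as np

def refine_pose(params, match, step=1.0, min_step=1e-3):
    """Greedy fine matching over 6 pose parameters (3 rotations,
    3 translations): perturb each parameter by +/-step, keep any
    change that increases match(params), and halve the step when a
    full sweep brings no improvement."""
    p = np.asarray(params, dtype=float)
    score = match(p)
    while step > min_step:
        improved = False
        for i in range(len(p)):
            for delta in (step, -step):
                q = p.copy()
                q[i] += delta
                s = match(q)
                if s > score:
                    p, score, improved = q, s, True
        if not improved:
            step *= 0.5
    return p, score
```

The rough manual preconfiguration matters here because a greedy search of this kind only finds the local maximum nearest its starting point.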

  17. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere.

  18. Handheld camera 3D modeling system using multiple reference panels

    NASA Astrophysics Data System (ADS)

    Fujimura, Kouta; Oue, Yasuhiro; Terauchi, Tomoya; Emi, Tetsuichi

    2002-03-01

A novel 3D modeling system in which a target object is easily captured and modeled by using a hand-held camera with several reference panels is presented in this paper. The reference panels are designed so that the camera position can be obtained from them and so that they can be discriminated from each other. A conventional 3D modeling system using a single reference panel places several restrictions on the target object, specifically its size and location. Our system uses multiple reference panels, set around the target object, to remove these restrictions. The main features of this system are as follows: 1) The whole shape and photo-realistic textures of the target object can be digitized from several still images or a movie captured with a hand-held camera, with each camera location calculated using the reference panels. 2) Our system can be provided as a software-only product: there are no special hardware requirements, not even for the reference panels, because they can be printed from image files. 3) The system can be applied to digitize larger objects. In the experiments, we developed and used an interactive region selection tool to detect the silhouette in each image instead of using the chroma-keying method. We have tested our system with a toy object. The calculation time is about 10 minutes (excluding capturing the images and extracting the silhouettes with our tool) on a personal computer with a Pentium III processor (600 MHz) and 320 MB of memory; however, it depends on how complex the images are and how many images are used. Our future plan is to evaluate the system with various kinds of objects, particularly large ones in outdoor environments.

  19. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

Due to its convenience and noninvasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, to accelerate rendering, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  20. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be calculated directly. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions about the imaged model, map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image's sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example, in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross-well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
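For a damped linearized inversion, the model resolution matrix and posterior covariance can be computed directly as described above. A minimal sketch for the small, direct-inversion case (function and variable names illustrative; `J` is the Jacobian, `lam` the damping):

```python
import numpy as np

def resolution_and_covariance(J, lam, sigma=1.0):
    """Model resolution matrix R and posterior covariance C for the
    damped least-squares estimate m = (J^T J + lam*I)^(-1) J^T d.
    diag(R) near 1 marks a well-resolved parameter; sqrt(diag(C))
    maps data noise of std sigma into parameter error."""
    JtJ = J.T @ J
    A = np.linalg.inv(JtJ + lam * np.eye(JtJ.shape[0]))
    R = A @ JtJ                      # resolution: m_est = R @ m_true
    C = sigma**2 * (A @ JtJ @ A)     # posterior model covariance
    return R, C
```

As the damping goes to zero with a full-rank Jacobian, R approaches the identity, i.e., every parameter is perfectly resolved; the columns of R are the point-spread functions the abstract proposes examining.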

  1. Validation of 3D ultrasound: CT registration of prostate images

    NASA Astrophysics Data System (ADS)

    Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

    2003-05-01

Worldwide, 20% of men are expected to develop prostate cancer at some point in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure, and imaging is critically important for its safe delivery. In cases where a CT device is available, combining the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications, the CT and U/S scans should be registered and fused into a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e., the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.

  2. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes.

  3. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
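
    The object-linking step described above, in which features in successive 2D slices lying within a threshold radius of each other are joined into one 3D object, can be sketched as follows (an illustrative greedy version, not the actual NASA implementation; function and variable names are invented):

```python
import math

def link_features(slices, radius):
    """Greedily link 2D feature centers across successive depth slices.

    slices : list of lists of (x, y) feature centers, one list per slice.
    A feature within `radius` of a track's last detection in the previous
    slice is appended to that track; otherwise it starts a new track
    (a new candidate 3D object).
    """
    tracks = []  # each track is a list of (slice_index, x, y)
    for z, feats in enumerate(slices):
        for x, y in feats:
            best, best_d = None, radius
            for t in tracks:
                tz, tx, ty = t[-1]
                d = math.hypot(x - tx, y - ty)
                if tz == z - 1 and d <= best_d:
                    best, best_d = t, d
            if best is not None:
                best.append((z, x, y))
            else:
                tracks.append([(z, x, y)])
    return tracks
```

    Linking three slices containing two pipes, for example, yields two tracks whose lengths equal the number of slices in which each pipe was detected.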

  4. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning, with the laser point cloud data as the basis, a Digital Ortho-photo Map as an auxiliary source, and 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  5. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luc, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach to integrating this 2-D information into a 3-D representation, based on a new approach called "3-D Volumetric Description from 2-D Orthogonal Projections". The Unimate PUMA 560 and TRAPIX 5500 real-time image processor have been used to test the success of the entire system.

  6. Joint calibration of 3D resist image and CDSEM

    NASA Astrophysics Data System (ADS)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model evaluates the resist image at a specific depth within the photoresist and then extracts the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal at which depth the CD is obtained. This depth inconsistency between modeling and SEM makes model calibration difficult for low-k1 images. In this paper, the vertical resist profile is obtained by extending the model from a planar (2D) to a quasi-3D approach and comparing the CD from this new model with the SEM CD. For this quasi-3D model, photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and found to be better than that of the 2D model.

  7. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown.

  8. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  9. Automated spatial alignment of 3D torso images.

    PubMed

    Bose, Arijit; Shah, Shishir K; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2011-01-01

    This paper describes an algorithm for automated spatial alignment of three-dimensional (3D) surface images in order to achieve a pre-defined orientation. Surface images of the torso are acquired from breast cancer patients undergoing reconstructive surgery to facilitate objective evaluation of breast morphology pre-operatively (for treatment planning) and/or post-operatively (for outcome assessment). Based on the viewing angle of the multiple cameras used for stereophotography, the orientation of the acquired torso in the images may vary from the normal upright position. Consequently, when translating this data into a standard 3D framework for visualization and analysis, the co-ordinate geometry differs from the upright position making robust and standardized comparison of images impractical. Moreover, manual manipulation and navigation of images to the desired upright position is subject to user bias. Automating the process of alignment and orientation removes operator bias and permits robust and repeatable adjustment of surface images to a pre-defined or desired spatial geometry.

  10. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.
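
    The accuracy measure used above, the mean square difference between the final registered image and the target, is straightforward to compute; a minimal numpy sketch with illustrative names and a toy phantom:

```python
import numpy as np

def mean_square_difference(registered, target):
    """Mean squared intensity difference between two equal-shape images."""
    registered = np.asarray(registered, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean((registered - target) ** 2))

# Toy check on a synthetic phantom: correcting a simulated misalignment
# should drive the measure down to zero.
target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0
moving = np.roll(target, 3, axis=1)    # misaligned copy of the phantom
aligned = np.roll(moving, -3, axis=1)  # ideal correction recovers it
```

    A real registration run would report this value for the deformed template rather than a simple shifted copy; the metric itself is unchanged.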

  11. Integral imaging based 3D display of holographic data.

    PubMed

    Yöntem, Ali Özgür; Onural, Levent

    2012-10-22

    We propose a method and present applications of this method that converts a diffraction pattern into an elemental image set in order to display them on an integral imaging based display setup. We generate elemental images based on diffraction calculations as an alternative to commonly used ray tracing methods. Ray tracing methods do not accommodate the interference and diffraction phenomena. Our proposed method enables us to obtain elemental images from a holographic recording of a 3D object/scene. The diffraction pattern can be either numerically generated data or digitally acquired optical data. The method shows the connection between a hologram (diffraction pattern) and an elemental image set of the same 3D object. We showed three examples, one of which is the digitally captured optical diffraction tomography data of an epithelium cell. We obtained optical reconstructions with our integral imaging display setup where we used a digital lenslet array. We also obtained numerical reconstructions, again by using the diffraction calculations, for comparison. The digital and optical reconstruction results are in good agreement.

  12. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910(®) scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the two breast surfaces. Three observers analyzed the precision of the evaluation protocol using 2 dummy models (n = 60) and 10 test subjects (n = 300), clinically tested it on 30 patients (n = 900), and compared it to established 2-D measurements on 23 breast reconstruction patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects, without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88), and may play a part in objective surgical outcome analysis after incorporation into clinical practice.
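
    The core computation, mirroring the left breast surface across the sagittal plane and measuring a mean 3-D contour difference to the right surface, can be approximated over point clouds with a nearest-point distance (a simplified sketch; the plane position, the brute-force neighbour search and the distance definition are assumptions, not the paper's exact protocol):

```python
import numpy as np

def mean_contour_difference(left_pts, right_pts, midline_x=0.0):
    """Mirror the left surface about the plane x = midline_x and return
    the mean nearest-point distance to the right surface (brute-force
    nearest neighbour, adequate for small demo clouds)."""
    left = np.asarray(left_pts, dtype=float).copy()
    left[:, 0] = 2.0 * midline_x - left[:, 0]  # reflect the x coordinate
    right = np.asarray(right_pts, dtype=float)
    d = np.linalg.norm(left[:, None, :] - right[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

    A perfectly symmetric pair of surfaces yields a difference of zero; any asymmetry increases the value.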

  13. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen.

  14. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable for successfully matching pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). In the version reported here, the algorithm is enhanced by additionally using the third spatial coordinate of the observed points on object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  15. Underwater 3D Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.

  16. Latest applications of 3D ToF-SIMS bio-imaging.

    PubMed

    Fletcher, John S

    2015-03-10

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is a rapidly developing technique for the characterization of a wide range of materials. Recently, advances in instrumentation and sample preparation approaches have provided the ability to perform 3D molecular imaging experiments. Polyatomic ion beams, such as C60, and gas cluster ion beams, often Arn (n = 500-4000), substantially reduce the subsurface damage accumulation associated with continued bombardment of organic samples with atomic beams. In this review, the capabilities of the technique are discussed and examples of the 3D imaging approach for the analysis of model membrane systems, single plant cells, and tissue samples are presented. Ongoing challenges for 3D ToF-SIMS imaging are also discussed along with recent developments that might offer improved 3D imaging prospects in the near future.

  17. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    PubMed

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

    To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution of 3D PSIF-DWI, we collected imaging data of 20 cavernous regions (both sides) in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also presented preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended by 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves, including the parts within blood flow as in the cavernous sinus region.

  18. Quantification of thyroid volume using 3-D ultrasound imaging.

    PubMed

    Kollorz, E K; Hahn, D A; Linke, R; Goecke, T W; Hornegger, J; Kuwert, T

    2008-04-01

    Ultrasound (US) is among the most popular diagnostic techniques today. It is non-invasive, fast, comparatively cheap, and does not require ionizing radiation. US is commonly used to examine the size and structure of the thyroid gland. In clinical routine, thyroid imaging is usually performed by means of 2-D US. Conventional approaches for measuring the volume of the thyroid gland or its nodules may therefore be inaccurate due to the lack of 3-D information. This work reports a semi-automatic segmentation approach for the classification and analysis of the thyroid gland based on 3-D US data. The images are scanned in 3-D, pre-processed, and segmented. Several pre-processing methods and an extension of a commonly used geodesic active contour level set formulation are discussed in detail. The results obtained by this approach are compared to manual interactive segmentations by a medical expert in five representative patients. Our work proposes a novel framework for the volumetric quantification of thyroid gland lobes, which may also be extended to other parenchymatous organs.

  19. 3D imaging of biological specimen using MS.

    PubMed

    Fletcher, John S

    2015-01-01

    Imaging MS can provide unique information about the distribution of native and non-native compounds in biological specimens. MALDI MS and secondary ion MS are the two most commonly applied imaging MS techniques and can provide complementary information about a sample. MALDI offers access to high-mass species such as proteins, while secondary ion MS can operate at higher spatial resolution and provide information about lower-mass species, including elemental signals. Imaging MS is not limited to two dimensions, and different approaches have been developed that allow 3D molecular images to be generated of chemicals in whole organs down to single cells. Resolution in the z-dimension is often higher than in x and y, so such analysis offers the potential for probing the distribution of drug molecules and studying drug action by MS with much higher precision, possibly even at the organelle level.

  20. 3D Gabor wavelet based vessel filtering of photoacoustic images.

    PubMed

    Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi

    2016-08-01

    Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of Gabor wavelets to enhance vasculature while eliminating noise due to the size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian-based method is carried out to extract vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image; tubular structures are then classified by eigenvalue decomposition of the local Hessian matrix at each voxel in the image. The algorithm is tested on non-invasive experiments and shows appreciable results in enhancing vasculature in photoacoustic images.
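
    The classification step, eigenvalue decomposition of the local Hessian at each voxel, rests on the observation that a bright tube has one near-zero eigenvalue (along its axis) and two strongly negative ones (across it). A numpy sketch of that idea follows (a toy stand-in: the response formula below is a deliberate simplification of full Frangi-style vesselness, and all names are invented):

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Per-voxel eigenvalues of the 3D Hessian, sorted by magnitude."""
    g = np.gradient(vol.astype(float))
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(g[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    ev = np.linalg.eigvalsh(H)               # ascending value order
    order = np.argsort(np.abs(ev), axis=-1)  # re-sort by |lambda|
    return np.take_along_axis(ev, order, axis=-1)

def tube_response(vol):
    """Toy bright-tube measure: strong where the two large-magnitude
    eigenvalues are negative (tube cross-section) and the smallest one
    is near zero (along the tube axis)."""
    l1, l2, l3 = np.moveaxis(hessian_eigenvalues(vol), -1, 0)
    resp = np.where((l2 < 0) & (l3 < 0), np.abs(l2 * l3) - l1 ** 2, 0.0)
    return np.clip(resp, 0.0, None)
```

    On a synthetic volume containing a bright Gaussian tube along the z axis, the response peaks on the tube's centerline and vanishes in the background.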

  1. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    The performance of discrete-cosine-transform-based denoising applied to multichannel remote sensing images corrupted by additive white Gaussian noise is analyzed. Images with high input SNR obtained by the satellite Earth Observing-1 (EO-1) mission's hyperspectral imager (Hyperion) are taken as test images. Denoising performance is characterized by the improvement in PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities of coefficients falling below a certain threshold) are used to predict denoising efficiency via curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly, and the universality of the prediction across channel counts is proven.
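
    The operation whose efficiency is being predicted, hard thresholding of 3D DCT coefficients computed over image blocks, can be sketched with a separable orthonormal DCT (a minimal illustration; block size, threshold value and names are arbitrary, and a production filter would process overlapping blocks and aggregate them):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def denoise_block_3d(block, threshold):
    """Hard-threshold 3D DCT denoising of one block (e.g. 8x8 pixels
    by K channels): forward separable DCT along each axis, zero the
    small coefficients, inverse transform."""
    C = [dct_matrix(n) for n in block.shape]
    coef = block.astype(float)
    for ax, M in enumerate(C):          # forward DCT along each axis
        coef = np.moveaxis(np.tensordot(M, coef, axes=(1, ax)), 0, ax)
    coef[np.abs(coef) < threshold] = 0.0  # hard thresholding
    for ax, M in enumerate(C):          # inverse (transpose) transform
        coef = np.moveaxis(np.tensordot(M.T, coef, axes=(1, ax)), 0, ax)
    return coef
```

    Because the transform is orthonormal, white Gaussian noise keeps its standard deviation in the coefficient domain, which is why simple threshold-exceedance statistics are predictive of the filter's gain.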

  2. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
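
    The FFT trick referred to above, scoring a phase-based similarity over every model position in a single transform pair, can be illustrated with classic phase correlation for pure translation (a simplified analogue of the paper's phase similarity measure, not its exact formulation):

```python
import numpy as np

def phase_correlation(image, template):
    """Find the (row, col) offset of `template` inside `image` from the
    normalized cross-power spectrum: one FFT pair evaluates every
    possible translation simultaneously."""
    F1 = np.fft.fft2(image)
    F2 = np.fft.fft2(template, s=image.shape)  # zero-pad to image size
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12             # keep phase only
    corr = np.fft.ifft2(cross).real            # the match surface
    return np.unravel_index(np.argmax(corr), corr.shape)
```

    The peak of the correlation surface plays the same role as the unambiguous peaks in the match surface described above, and candidate locations can likewise be ranked by peak height.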

  3. High-throughput imaging: Focusing in on drug discovery in 3D.

    PubMed

    Li, Linfeng; Zhou, Qiong; Voss, Ty C; Quick, Kevin L; LaBarbera, Daniel V

    2016-03-01

    3D organotypic culture models such as organoids and multicellular tumor spheroids (MCTS) are becoming more widely used for drug discovery and toxicology screening. As a result, 3D culture technologies adapted for high-throughput screening formats are prevalent. While a multitude of assays have been reported and validated for high-throughput imaging (HTI) and high-content screening (HCS) for novel drug discovery and toxicology, limited HTI/HCS with large compound libraries have been reported. Nonetheless, 3D HTI instrumentation technology is advancing and this technology is now on the verge of allowing for 3D HCS of thousands of samples. This review focuses on the state-of-the-art high-throughput imaging systems, including hardware and software, and recent literature examples of 3D organotypic culture models employing this technology for drug discovery and toxicology screening.

  4. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies the remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor for moving from well-understood instantaneous hands-on manual control to less well-understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  5. The application of camera calibration in range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by LF Gillespie in the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. It was not until the beginning of this century, as the hardware technology matured, that this technology developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including analysis of the camera's internal (intrinsic) and external (extrinsic) parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is
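
    The per-pixel inversion described above, turning a gated slice image plus its gate distance into 3-D coordinates, reduces to back-projection through a calibrated pinhole model (a sketch; the intrinsic parameters fx, fy, cx, cy are assumed to come from the calibration step, and all names are illustrative):

```python
import numpy as np

def slice_to_points(slice_mask, gate_distance, fx, fy, cx, cy):
    """Back-project the pixels lit in one range-gated slice into 3-D
    camera coordinates. `slice_mask` is a boolean image of returns for
    this gate; `gate_distance` is the slice's range (same unit out)."""
    v, u = np.nonzero(slice_mask)          # pixel rows, columns
    z = np.full(u.shape, float(gate_distance))
    x = (u - cx) / fx * z                  # pinhole inversion
    y = (v - cy) / fy * z
    return np.stack([x, y, z], axis=1)
```

    Stacking the point sets recovered from all gate delays then yields the full 3-D reconstruction of the scene.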

  6. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The method of depth calculation from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
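
    The depth-from-disparity relation underlying the discussion above is z = f·B/d; with several baselines, the per-baseline ratios d_i/B_i all equal f/z for a correct match and can be averaged to sharpen the estimate. A minimal sketch with illustrative names:

```python
def depth_from_disparity(focal_px, baseline, disparity):
    """Single-baseline pinhole depth: z = f * B / d
    (focal length in pixels, baseline in scene units, disparity in pixels)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline / disparity

def depth_multi_baseline(focal_px, baselines, disparities):
    """Multiple-baseline estimate: average the d/B ratios, which all
    equal f/z for a consistent match, then invert once."""
    ratios = [d / b for b, d in zip(baselines, disparities)]
    return focal_px * len(ratios) / sum(ratios)
```

    Averaging the ratios rather than the individual depths keeps the combination linear in the measured disparities, which is the essence of the multiple-baseline precision gain.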

  7. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of a 3D surgical plan that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating a physical (entity) tooth model, which is manipulated to determine the optimum occlusal position, with simultaneously displayed real-time 3D-CT skeletal images (the 3D image display portion), the mandibular position and posture that improve skeletal morphology and occlusal condition can be determined. The realistic operation of the physical model combined with the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  8. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D finite-difference prestack depth migration, remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D prestack depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable seismic imaging code.
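    At the heart of finite-difference migration is a stencil update of the acoustic wave equation. The toy 2D leapfrog kernel below is purely illustrative (it is not the project's migration code, and all parameter values are invented for the example); it shows the kind of per-grid-point work that must be parallelized across an MPP.

```python
import numpy as np

# Second-order leapfrog update of the 2D acoustic wave equation:
# u_next = 2u - u_prev + (v*dt/dx)^2 * laplacian(u).
def step(u, u_prev, v, dt, dx):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return 2 * u - u_prev + (v * dt / dx) ** 2 * lap

n = 64
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0          # point source at the grid centre
u_prev = u.copy()                # zero initial velocity
for _ in range(10):              # propagate the wavefield for 10 steps
    u, u_prev = step(u, u_prev, v=1500.0, dt=1e-4, dx=1.0), u
```

    The chosen values satisfy the CFL stability condition (v*dt/dx ≈ 0.15), which any such explicit scheme must respect.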

  9. A single element 3D ultrasound tomography system.

    PubMed

    Xiang Zhang; Fincke, Jonathan; Kuzmin, Andrey; Lempitsky, Victor; Anthony, Brian

    2015-08-01

    Over the past decade, substantial effort has been directed toward developing ultrasonic systems for medical imaging. With advances in computational power, previously theorized scanning methods such as ultrasound tomography can now be realized. In this paper, we present the design, error analysis, and initial backprojection images from a single-element 3D ultrasound tomography system. The system enables volumetric pulse-echo or transmission imaging of distal limbs. The motivating clinical applications include improving prosthetic fittings, monitoring bone density, and characterizing muscle health. The system is designed as a flexible mechanical platform for iterative development of algorithms targeting imaging of soft tissue and bone. The mechanical system independently controls the movement of two single-element ultrasound transducers in a cylindrical water tank. Each transducer can independently circle about the center of the tank as well as move vertically in depth. High-resolution positioning feedback (~1 μm) and control enable flexible positioning of the transmitter and the receiver around the cylindrical tank; exchangeable transducers enable algorithm testing with varying transducer frequencies and beam geometries. High-speed data acquisition (DAQ) through a dedicated National Instruments PXI setup streams digitized data directly to the host PC. System positioning error has been quantified and is within the limits imposed by the imaging requirements of the motivating applications.
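    A minimal pulse-echo backprojection can be sketched as below. The geometry (tank radius, sound speed, sampling rate) and the function are assumptions for illustration, not the system's actual parameters: each A-line sample at time t corresponds to a reflector at range r = c*t/2 from the transducer, and every sample is smeared back onto the grid at that range and summed over transducer positions.

```python
import numpy as np

def backproject(alines, angles, R=0.1, c=1480.0, fs=10e6, n=65):
    """Sum each A-line back over the image grid at its echo range."""
    xs = np.linspace(-R, R, n)
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((n, n))
    for a, theta in zip(alines, angles):
        tx, ty = R * np.cos(theta), R * np.sin(theta)
        dist = np.hypot(X - tx, Y - ty)              # grid-to-transducer range
        idx = np.clip((2 * dist / c * fs).astype(int), 0, len(a) - 1)
        img += a[idx]                                # accumulate echo samples
    return img / len(alines)

angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
alines = [np.zeros(2048) for _ in angles]
for a in alines:
    a[1351] = 1.0       # synthetic echo of a point scatterer at the tank centre
img = backproject(alines, angles)
```

    With the synthetic data above, every view reinforces the centre pixel while the backprojection arcs elsewhere stay incoherent.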

  10. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. The traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated, and wireless devices can be added, including PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.

  11. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
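    The mean-subtraction step described above can be sketched directly. This is a minimal sketch under an assumed data layout (spectral planes along the first axis of a subband cube), not the flight code; the means would have to be transmitted as side information.

```python
import numpy as np

# Mean subtraction: make each spatial plane of a spatially low-pass
# subband zero-mean before 2D-style encoding.
def mean_subtract(subband):
    means = subband.mean(axis=(1, 2))          # one mean per spectral plane
    centered = subband - means[:, None, None]
    return centered, means

rng = np.random.default_rng(0)
cube = rng.normal(loc=50.0, size=(4, 8, 8))    # planes far from zero-mean
centered, means = mean_subtract(cube)
```

    The operation is exactly invertible given the stored means, so no information is lost before entropy coding.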

  12. 3D geometry-based quantification of colocalizations in multichannel 3D microscopy images of human soft tissue tumors.

    PubMed

    Wörz, Stefan; Sander, Petra; Pfannmöller, Martin; Rieker, Ralf J; Joos, Stefan; Mechtersheimer, Gunhild; Boukamp, Petra; Lichter, Peter; Rohr, Karl

    2010-08-01

    We introduce a new model-based approach for automatic quantification of colocalizations in multichannel 3D microscopy images. The approach uses different 3D parametric intensity models in conjunction with a model fitting scheme to localize and quantify subcellular structures with high accuracy. The central idea is to determine colocalizations between different channels based on the estimated geometry of the subcellular structures as well as to differentiate between different types of colocalizations. A statistical analysis was performed to assess the significance of the determined colocalizations. This approach was used to successfully analyze about 500 three-channel 3D microscopy images of human soft tissue tumors and controls.

  13. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects with a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  14. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities makes it possible to enhance the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner before the MS measurements are performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. 
The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  15. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information from one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  16. Radon transform based automatic metal artefacts generation for 3D threat image projection

    NASA Astrophysics Data System (ADS)

    Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.; Mouton, Andre

    2013-10-01

    Threat Image Projection (TIP) plays an important role in aviation security. In order to evaluate human security screeners in determining threats, TIP systems project images of realistic threat items into the images of the passenger baggage being scanned. In this proof of concept paper, we propose a 3D TIP method which can be integrated within new 3D Computed Tomography (CT) screening systems. In order to make the threat items appear as if they were genuinely located in the scanned bag, appropriate CT metal artefacts are generated in the resulting TIP images according to the scan orientation, the passenger bag content and the material of the inserted threat items. This process is performed in the projection domain using a novel methodology based on the Radon Transform. The obtained results using challenging 3D CT baggage images are very promising in terms of plausibility and realism.
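    The key property that lets a threat item be inserted in the projection domain is the linearity of the Radon transform. The toy two-view projector below (simple row/column sums standing in for a full Radon transform) is an illustrative sketch, not the authors' method, and demonstrates only the additivity that the insertion step relies on; the paper's contribution of generating matching metal artefacts is not reproduced here.

```python
import numpy as np

# Toy two-view "Radon" projector: projections along rows and columns.
def radon2(img):
    return np.vstack([img.sum(axis=0), img.sum(axis=1)])

bag = np.zeros((8, 8))
bag[2:4, 2:4] = 1.0          # baggage content
threat = np.zeros((8, 8))
threat[5:7, 5:7] = 2.0       # threat item to be inserted

# Linearity: projections of (bag + item) equal the sum of projections,
# so the item can be added directly to the bag's sinogram.
lhs = radon2(bag + threat)
rhs = radon2(bag) + radon2(threat)
```

    In the real system the item's projections are modified first, so that beam-hardening and streak artefacts appear consistent with the bag content after reconstruction.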

  17. Image segmentation to inspect 3-D object sizes

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Pin; Fuh, Chiou-Shann

    1996-01-01

    Object size inspection is an important task and has various applications in computer vision, for example the automatic control of stone-breaking machines, which perform better if the sizes of the stones to be broken can be predicted. An algorithm is proposed for image segmentation in size inspection of almost round stones with high or low texture. Although our experiments focus on stones, the algorithm can be applied to other 3-D objects. We use one fixed camera and four light sources at four different positions, one at a time, to take four images. We then compute the image differences and binarize them to extract edges. We explain, step by step, the photographing, the edge extraction, the noise removal, and the edge gap filling. Experimental results are presented.
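    The difference-and-binarize step can be sketched as follows. This is a hypothetical simplification (pairwise absolute differences and a fixed threshold), not the paper's exact procedure: shading and shadows change with the light direction mostly near object boundaries, so thresholded differences yield candidate edges.

```python
import numpy as np

def edges_from_lighting(images, thresh):
    """Binarised map of where the scene changes with light direction."""
    acc = np.zeros_like(images[0], dtype=float)
    for i in range(len(images)):
        for j in range(i + 1, len(images)):
            acc = np.maximum(acc, np.abs(images[i] - images[j]))
    return acc > thresh

# Toy scene: a patch whose lit half swaps between two light directions.
imgL = np.zeros((6, 6)); imgL[2:4, 1:3] = 1.0
imgR = np.zeros((6, 6)); imgR[2:4, 3:5] = 1.0
edges = edges_from_lighting([imgL, imgR, imgL, imgR], thresh=0.5)
```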

  18. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limit this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions, and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and a Blackman density-tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is lower than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains within a 5 dB range for a wide selection of steering angles. The simulation results may represent a reference guide to the design of spiral sparse array probes for different application fields.
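    A Fermat-spiral element layout can be sketched as below. This is an assumed textbook construction (golden-angle increments with square-root radial growth), shown here without the paper's density-tapering step, which would additionally warp the radial coordinate according to a window such as Blackman.

```python
import numpy as np

def fermat_spiral(n_elements, aperture_radius):
    """Element k at angle k * golden angle, radius proportional to sqrt(k)."""
    golden = np.pi * (3.0 - np.sqrt(5.0))    # golden angle, ~2.39996 rad
    k = np.arange(n_elements)
    r = aperture_radius * np.sqrt((k + 0.5) / n_elements)
    th = k * golden
    return r * np.cos(th), r * np.sin(th)

x, y = fermat_spiral(256, 30.0)              # radii in wavelengths
```

    The sqrt radial law gives approximately uniform element density over the aperture; density tapering replaces it to lower the side-lobe pedestal.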

  19. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and construction of a low-cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in fields such as science, engineering, and entertainment. They are classified as contact or contactless scanners; the latter are the most widely used but are expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a mobile horizontal laser line, which is deformed depending on the 3-dimensional surface of the solid. Using digital image processing, the deformation detected by the camera is analysed, allowing the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The results show acceptable quality and significant detail in the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool, which can be used for many applications, mainly by engineering students.
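    The triangulation step can be sketched with a simplified geometry. All parameters here are hypothetical, not the prototype's calibration: with the camera looking straight at the scene at standoff Z, a surface point raised by h shifts the laser stripe sideways by h*tan(theta) in the world (theta being the laser's angle to the optical axis), which projects to p = f*h*tan(theta)/Z pixels on the sensor.

```python
import numpy as np

def height_from_shift(p_px, f_px, standoff_mm, theta_rad):
    """Surface height from the stripe's lateral pixel shift.

    Inverts p = f * h * tan(theta) / Z for h, assuming the standoff Z
    is large compared with the height variation.
    """
    return p_px * standoff_mm / (f_px * np.tan(theta_rad))

# e.g. 10 px shift, 1000 px focal length, 200 mm standoff, 45-degree laser
h = height_from_shift(10, 1000.0, 200.0, np.pi / 4)
```

    Real systems refine this with a full camera-laser calibration rather than the small-height approximation used here.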

  20. Effective classification of 3D image data using partitioning methods

    NASA Astrophysics Data System (ADS)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
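    The recursive dynamic partitioning idea can be sketched as below. The stopping rule here (split while the ROI-voxel count exceeds a fixed threshold) is a deliberate simplification of the paper's statistical discriminative-power test; the structure of the recursion, splitting a box along its longest axis and collecting leaf voxel counts as attributes, is the point being illustrated.

```python
import numpy as np

def partition(vol, box, max_count=16):
    """Recursively split `box` while its ROI-voxel count stays large."""
    (x0, x1), (y0, y1), (z0, z1) = box
    count = int(vol[x0:x1, y0:y1, z0:z1].sum())
    if count <= max_count:
        return [count]                       # box kept as one attribute
    sizes = (x1 - x0, y1 - y0, z1 - z0)
    ax = int(np.argmax(sizes))               # split the longest axis
    lo, hi = box[ax]
    mid = (lo + hi) // 2
    left = [list(b) for b in box]
    right = [list(b) for b in box]
    left[ax][1], right[ax][0] = mid, mid
    as_box = lambda b: tuple(tuple(p) for p in b)
    return (partition(vol, as_box(left), max_count) +
            partition(vol, as_box(right), max_count))

vol = np.zeros((8, 8, 8))
vol[:4] = 1.0                                # ROI fills half the volume
attrs = partition(vol, ((0, 8), (0, 8), (0, 8)))
```

    The leaf counts form the attribute vector that would be fed to the neural network classifier.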

  1. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary: There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third-molar surgery, rendering necessary a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between the third molars and the mandibular canal using dental CT scans, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we deduce that 3D images are not indispensable, but they can provide very welcome assistance in the most complicated cases. PMID:23386934

  2. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

    Micro (μ-) axial tomography is a challenging technique in microscopy which improves quantitative imaging, especially in cytogenetic applications, by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement in the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition based on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least-squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy: if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. 
Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
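    Given matched centroid pairs, the least-squares rigid transform can be recovered with the SVD-based Kabsch procedure. This is a standard sketch of that step only, not the authors' implementation, and it omits the weighted bipartite matching that produces the correspondences.

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimising ||R p_i + t - q_i||^2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic test: rotate and translate 20 centroids, then recover R, t.
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(P, Q)
```

    Because many centroids contribute, the estimated transform averages out individual localization errors, which is why the alignment precision can beat the single-object localization precision.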

  3. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.

  4. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. After etching, the plastics were immersed for 10 minutes in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. 
The range of track diameter observed was between 4

  5. 3-D Imaging of Partly Concealed Targets by Laser Radar

    DTIC Science & Technology

    2005-10-01

    A laser in the green wavelength region was used for illumination.

  6. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  7. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    NASA Astrophysics Data System (ADS)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion-microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not initially be clearly discerned, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was administered. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D CT images that correctly reflect the structure of individual living organs, which is expected to be very useful in biological research.

  8. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    NASA Astrophysics Data System (ADS)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify their deformations and damage.

  9. Multi-fields direct design approach in 3D: calculating a two-surface freeform lens with an entrance pupil for line imaging systems.

    PubMed

    Nie, Yunfeng; Thienpont, Hugo; Duerr, Fabian

    2015-12-28

    Including an entrance pupil in optical systems provides clear benefits for balancing the overall performance of freeform and/or rotationally symmetric imaging systems. Existing direct design methods that are based on perfect imaging of a few discrete ray bundles are not well suited for wide-field-of-view systems. In this paper, a three-dimensional multi-fields direct design approach is proposed to balance the full-field imaging performance of a two-surface freeform lens. The optical path lengths and image points of numerous fields are calculated during the procedure, so that very few initial parameters are needed in advance. Design examples of a barcode scanner lens as well as a line imaging objective are introduced to demonstrate the effectiveness of this method.

  10. Open-GL-based stereo system for 3D measurements

    NASA Astrophysics Data System (ADS)

    Boochs, Frank; Gehrhoff, Anja; Neifer, Markus

    2000-05-01

A stereo system designed and used for the measurement of 3D coordinates within metric stereo image pairs is presented. First, the motivation for the development is shown: the evaluation of stereo images. As the use and availability of metric images of digital type rapidly increases, corresponding equipment for the measuring process is needed. Systems developed up to now are either highly specialized ones, built on high-end graphics workstations with corresponding pricing, or simple ones with restricted measuring functionality. A new conception is shown that avoids special high-end graphics hardware while providing the required measuring functionality. The presented stereo system is based on PC hardware equipped with a graphics board and uses an object-oriented programming technique. The specific needs of a measuring system are shown, along with the corresponding requirements the system has to meet. The key role of OpenGL is described: it supplies elementary graphics functions that are directly supported by graphics boards and thus provides the performance needed. Further important aspects, such as modularity and hardware independence, and their value for the solution are shown. Finally, some sample functions concerned with image display and handling are presented in more detail.

  11. Digital holography particle image velocimetry for the measurement of 3D t-3c flows

    NASA Astrophysics Data System (ADS)

    Shen, Gongxin; Wei, Runjie

    2005-10-01

In this paper a digital in-line holographic recording and reconstruction system was set up and used for particle image velocimetry in 3D t-3c flow measurements (three-component (3c) velocity vector fields in a three-dimensional (3D) space with time history (t)), making up a new full-flow-field experimental technique: digital holographic particle image velocimetry (DHPIV). The traditional holographic film is replaced by a CCD chip that instantaneously records the interference fringes directly, without darkroom processing, and virtual image slices at different positions are reconstructed computationally from the digital hologram using the Fresnel-Kirchhoff integral method. A complex-field signal filter (an analyzing image computed from the intensity and phase of the real and imaginary parts via the fast Fourier transform (FFT)) is applied during image reconstruction to achieve a thin focus depth, which strongly affects the resolution of the vertical velocity component. Using frame-straddling CCD techniques, the 3c velocity vectors are computed by 3D cross-correlation through space-interrogation block matching across the reconstructed image slices with the digital complex-field signal filter. The 3D-3c velocity field (about 20,000 vectors), 3D streamline and 3D vorticity fields, and time-evolution movies (30 fields/s) of the 3D t-3c flows were obtained experimentally using this DHPIV method.
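The slice-wise numerical refocusing described above can be sketched with the angular-spectrum propagator, one standard discrete form of the Fresnel-Kirchhoff integral (all parameters below are invented for illustration): propagate a point scatterer to the sensor plane, then refocus it back.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular-spectrum
    method (a numerical form of the Fresnel-Kirchhoff integral)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z)          # transfer function (evanescent waves dropped)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A point-like particle, recorded 10 mm downstream and numerically refocused
n, dx, wl = 256, 5e-6, 532e-9
field = np.zeros((n, n), complex)
field[n // 2, n // 2] = 1.0                      # particle at the origin
holo = angular_spectrum(field, wl, dx, 0.01)     # "record" the hologram
refocus = angular_spectrum(holo, wl, dx, -0.01)  # back-propagate to the particle
peak = np.unravel_index(np.abs(refocus).argmax(), refocus.shape)
print(int(peak[0]), int(peak[1]))  # 128 128: the particle reappears at the centre
```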

  12. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

A three-dimensional (3D) x-ray luggage- screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector to lose concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively (see figure).
X-ray illumination
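The proposed left/right sensor pair would recover depth the way ordinary stereo does. A minimal pinhole-stereo sketch as an illustrative stand-in for the scanner's actual source/detector geometry (the focal length, baseline and disparity values are invented):

```python
# Depth from the disparity between the left- and right-sensor images,
# using the plain pinhole-stereo relation Z = f * B / d.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return f_px * baseline_m / disparity_px

# An object seen 400 px apart by sensors 0.2 m apart, f = 1000 px
print(depth_from_disparity(1000, 0.2, 400))  # 0.5 (metres)
```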

  13. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

Fuchs, Theobald; Schön, Tobias; Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours, which is of course too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without the legal complications, high time consumption and risks to security personnel of a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of the 500 to 1000 rotational steps needed by conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power gets steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
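The paper does not name its specific iterative algorithm; SIRT is one classic member of the family it describes, and its update rule can be shown on a deliberately tiny dense system (real sparse-view CT uses a sparse ray-tracing operator):

```python
import numpy as np

def sirt(A, p, n_iter=3000, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique (SIRT) for the
    linear projection model p = A @ x -- a toy dense stand-in for
    sparse-view CT reconstruction."""
    x = np.zeros(A.shape[1])
    row = A.sum(axis=1)   # per-ray normalisation (ray "lengths")
    col = A.sum(axis=0)   # per-pixel normalisation
    for _ in range(n_iter):
        r = (p - A @ x) / row          # normalised residual per ray
        x += relax * (A.T @ r) / col   # backproject and update
    return x

# A tiny 9-pixel "image" observed through 9 well-conditioned synthetic rays
rng = np.random.default_rng(1)
A = 0.5 * np.eye(9) + rng.uniform(0.0, 0.1, (9, 9))
truth = rng.uniform(0.0, 1.0, 9)
x = sirt(A, A @ truth)
print(np.allclose(x, truth, atol=1e-5))  # the iteration recovers the image
```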

  14. Imaging Shallow Salt With 3D Refraction Migration

    NASA Astrophysics Data System (ADS)

    Vanschuyver, C. J.; Hilterman, F. J.

    2005-05-01

In offshore West Africa, numerous salt walls are within 200 m of sea level. Because of the shallowness of these salt walls, reflections from the salt top can be difficult to map, making it impossible to build an accurate velocity model for subsequent pre-stack depth migration. An accurate definition of salt boundaries is critical to any depth model where salt is present. Unfortunately, when a salt body is very shallow, the reflection from the upper interface can be obscured by the large offsets between the source and near receivers, and by interference from multiples and other near-surface noise events. A new method is described using 3D migration of the refraction waveforms, which is simplified by several constraints in the model definition. The azimuth and dip of the refractor are found by imaging with Kirchhoff theory. A Kirchhoff migration is performed where the traveltime values are adjusted to use the CMP refraction traveltime equation. I assume the sediment and salt velocities to be known, so that once the image time is specified, the dip and azimuth of the refraction path can be found. The resulting 3D refraction migrations are in excellent depth agreement with available well control. In addition, the refraction migration time picks of deeper salt events agree with time picks of the same events on the reflection migration.
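For a single flat refractor, the head-wave (refraction) traveltime the abstract refers to has the classic intercept-time form t(x) = x/v2 + 2h·cos(θc)/v1 with critical angle θc = arcsin(v1/v2). A sketch with illustrative values (the abstract assumes sediment and salt velocities known but gives no numbers):

```python
import math

def refraction_traveltime(x, v1, v2, h):
    """Head-wave CMP traveltime for one flat refractor at depth h:
    v1 = sediment velocity, v2 = salt velocity (v2 > v1), x = offset."""
    theta_c = math.asin(v1 / v2)               # critical angle
    return x / v2 + 2 * h * math.cos(theta_c) / v1

# 2 km offset, 1800 m/s sediment, 4500 m/s salt, refractor at 200 m depth
t = refraction_traveltime(x=2000.0, v1=1800.0, v2=4500.0, h=200.0)
print(round(t, 4))  # 0.6481 (seconds)
```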

  15. 3-D visualization and animation technologies in anatomical imaging.

    PubMed

    McGhee, John

    2010-02-01

    This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation.

  16. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  17. 3-D Imaging and Simulation for Nephron Sparing Surgical Training.

    PubMed

    Ahmadi, Hamed; Liu, Jen-Jane

    2016-08-01

Minimally invasive partial nephrectomy (MIPN) is now considered the procedure of choice for small renal masses, largely based on functional advantages over traditional open surgery. Lack of haptic feedback, the need for spatial understanding of tumor borders, and advanced operative techniques to minimize ischemia time or achieve zero-ischemia PN are among the factors that make MIPN a technically demanding operation with a steep learning curve for inexperienced surgeons. Surgical simulation has emerged as a useful training adjunct in residency programs to facilitate the acquisition of these complex operative skills in the setting of restricted work hours and limited operating room time and autonomy. However, the majority of available surgical simulators focus on basic surgical skills, and procedure-specific simulation is needed for optimal surgical training. Advances in 3-dimensional (3-D) imaging have also enhanced the surgeon's ability to localize tumors intraoperatively. This article focuses on recent procedure-specific simulation models for laparoscopic and robotic-assisted PN and advanced 3-D imaging techniques as part of preoperative and, in some cases, intraoperative surgical planning.

  18. 0.18 μm CMOS fully differential CTIA for a 32x16 ROIC for 3D ladar imaging systems

    NASA Astrophysics Data System (ADS)

    Helou, Jirar N.; Garcia, Jorge; Sarmiento, Mayra; Kiamilev, Fouad; Lawler, William

    2006-08-01

We describe a 2-D fully differential Readout Integrated Circuit (ROIC) designed to convert the photocurrents from an array of differential metal-semiconductor-metal (MSM) detectors into voltage signals suitable for digitization and post-processing. The 2-D MSM array and CMOS ROIC are designed to function as a front-end module for an amplitude-modulated/continuous-wave (AM/CW) 3-D ladar imager under development at the Army Research Laboratory. One important aspect of our ROIC design is scalability. Within reasonable power consumption and photodetector size constraints, the ROIC architecture presented here scales up linearly without added complexity. The other key feature of our ROIC design is the mitigation of local oscillator coupling. In our ladar imaging application, the signal demodulation process that takes place in the MSM detectors introduces parasitic radio frequency (rf) currents that can be 4 to 5 orders of magnitude greater than the signal of interest. We present a fully differential photodetector architecture and a circuit-level solution to reduce this parasitic effect. As a proof of principle we have fabricated a 0.18 μm CMOS 32x16 fully differential ROIC with an array of 32 correlated double sampling (CDS) capacitive transimpedance amplifiers (CTIAs), and a custom printed circuit board equipped to verify the test chip functionality. In this paper we discuss the fully differential IC design architecture and implementation and present the future testing strategy.

  19. Abdominal aortic aneurysm imaging with 3-D ultrasound: 3-D-based maximum diameter measurement and volume quantification.

    PubMed

    Long, A; Rouet, L; Debreuve, A; Ardon, R; Barbe, C; Becquemin, J P; Allaire, E

    2013-08-01

    The clinical reliability of 3-D ultrasound imaging (3-DUS) in quantification of abdominal aortic aneurysm (AAA) was evaluated. B-mode and 3-DUS images of AAAs were acquired for 42 patients. AAAs were segmented. A 3-D-based maximum diameter (Max3-D) and partial volume (Vol30) were defined and quantified. Comparisons between 2-D (Max2-D) and 3-D diameters and between orthogonal acquisitions were performed. Intra- and inter-observer reproducibility was evaluated. Intra- and inter-observer coefficients of repeatability (CRs) were less than 5.18 mm for Max3-D. Intra-observer and inter-observer CRs were respectively less than 6.16 and 8.71 mL for Vol30. The mean of normalized errors of Vol30 was around 7%. Correlation between Max2-D and Max3-D was 0.988 (p < 0.0001). Max3-D and Vol30 were not influenced by a probe rotation of 90°. Use of 3-DUS to quantify AAA is a new approach in clinical practice. The present study proposed and evaluated dedicated parameters. Their reproducibility makes the technique clinically reliable.
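The coefficients of repeatability (CRs) quoted above can be computed from repeated measurements; one common Bland-Altman definition is 1.96 × SD of the paired differences (the paper may use a variant). A sketch with invented repeated Max3-D readings:

```python
import math

def coefficient_of_repeatability(pairs):
    """Bland-Altman coefficient of repeatability: 1.96 x SD of the
    paired differences between repeated measurements."""
    diffs = [a - b for a, b in pairs]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# Illustrative repeated maximum-diameter readings (mm) by one observer
pairs = [(51.2, 50.8), (47.9, 48.5), (55.0, 54.1), (49.3, 49.9), (52.4, 52.0)]
print(round(coefficient_of_repeatability(pairs), 2))  # 1.31 (mm)
```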

  20. 3D city models completion by fusing lidar and image data

    NASA Astrophysics Data System (ADS)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations, usually derived from Lidar (Light Detection and Ranging) airborne scanning or image-based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. Particularly, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, then backprojected onto the aerial images, and finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for the final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, optimally aligned to the reference dataset, can be used for the generation of an enhanced and more accurately textured 3D city model.
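The backprojection step above, taking Lidar-derived 3D coordinates into the aerial images, follows the standard pinhole model x ~ K[R|t]X. A minimal sketch with invented intrinsics and camera pose:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D world points into an image with the pinhole model
    x ~ K [R | t] X (no lens distortion modelled)."""
    cam = R @ points_3d.T + t[:, None]   # world -> camera frame
    uv = K @ cam                         # camera frame -> homogeneous pixels
    return (uv[:2] / uv[2]).T            # perspective divide

# Illustrative intrinsics and pose: nadir camera 10 m above the scene
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
gcp = np.array([[1.0, 2.0, 0.0]])        # one Lidar-derived ground point
print(project(gcp, K, R, t)[0])          # [420. 440.] (pixel coordinates)
```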

  1. New techniques of determining focus position in gamma knife operation using 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Xiong, Yingen; Wang, Dezong; Zhou, Quan

    1994-09-01

In this paper, new techniques for determining the position of a disease focus in a gamma knife operation are presented. A transparent 3D color image of the body organ is reconstructed using a new three-dimensional reconstruction method, and the position, area, and volume of the disease focus, such as a cancer or tumor, are then calculated for use in the gamma knife operation. The CT pictures are input into a digital image processing system, the useful information is extracted to obtain the original data, and the transparent 3D color image is reconstructed from these data. Using this image, the positions of the organ and the disease focus are determined in a coordinate system; while the 3D image is reconstructed, their areas and volumes can be calculated at the same time. Practical application shows that the positions of the organ and the disease focus can be determined exactly using the transparent 3D color image, which is very useful in gamma knife and other surgical operations. The techniques presented in this paper have great application value.

  2. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860
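The synthetic-image validation of step two can be mimicked in miniature: generate a blurred synthetic "nucleus" with known ground truth, segment it (here by simple thresholding rather than the paper's derivative-based algorithm, which is not specified in detail), and score the result, e.g. with a Dice coefficient:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean masks (1.0 = perfect agreement)."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Ground-truth disc and its blurred, noisy "acquired" image
y, x = np.mgrid[0:64, 0:64]
truth = (x - 32) ** 2 + (y - 32) ** 2 <= 10 ** 2
image = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))
rng = np.random.default_rng(2)
noisy = image + rng.normal(0, 0.05, image.shape)   # acquisition noise

segmented = noisy > 0.4                            # stand-in segmentation
print(round(dice(segmented, truth), 2))            # overlap score vs. ground truth
```

The same scoring loop, run over a grid of noise levels and object densities, is exactly the kind of quantitative validation the abstract describes.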

  3. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  4. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  5. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, the pixel size and the aperture of the parallax barrier slit, to improve the uniformity of image brightness in the viewing zone. The eye tracking that monitors the positions of the viewer's eyes enables the pixel-data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the right images for the respective eyes and therefore produces no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display without eye tracking. Our 3D display system also provides multiviews for motion parallax under eye tracking. More importantly, we demonstrate substantial reduction of point crosstalk at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The presented multiview autostereoscopic 3D display can greatly resolve the point-crosstalk problem, one of the critical factors that has made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.

  6. 3D X-ray imaging methods in support catheter ablations of cardiac arrhythmias.

    PubMed

    Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

    2014-10-01

Cardiac arrhythmias are a very frequent illness. Pharmacotherapy is not very effective for persistent arrhythmias and carries a number of risks. Catheter ablation has become an effective and curative treatment method over the past 20 years. To support complex arrhythmia ablations, 3D X-ray imaging of the cardiac cavities is used, most frequently 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) is a modern method enabling the creation of CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages lie in the possibility of obtaining images during the procedure, a decreased radiation dose, and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow the creation and segmentation of 3D models of all cardiac cavities. Recently, research has demonstrated the use of 3DRA to create 3D models of other cardiac (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). These models can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, directly fused with 3D electroanatomic systems and/or with fluoroscopy. Intensive development in the creation and use of 3D models has taken place over the past years, and they are now routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development may be anticipated in both the creation and use of these models.

  7. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lags due to a number of technical problems. Potentially, the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced by impregnating the sample with epoxy doped with a fluorochrome, enabling discrimination of features such as void space or material boundaries. However, for many geomaterials this method cannot be used, because they do not naturally fluoresce and epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not undergone extensive sample preparation are of poor quality and lack the image and edge contrast needed to apply commonly used segmentation techniques for quantitative study of features such as vesicularity and internal structure. In our present work, we are developing a methodology for quantitative 3D analysis of geomaterial images collected with a confocal microscope, with minimal prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. The step-by-step image analysis includes image filtration to enhance edges or material interfaces, and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the

  8. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    NASA Astrophysics Data System (ADS)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because of its ability to deploy to different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D, then published to a web server to compare the effectiveness of the different 3D terrain data (contour intervals) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine, thereby helping decision makers and planners in this field decide which contour interval is applicable for their tasks.

  9. Repositioning accuracy of two different mask systems-3D revisited: Comparison using true 3D/3D matching with cone-beam CT

    SciTech Connect

    Boda-Heggemann, Judit . E-mail: judit.boda-heggemann@radonk.ma.uni-heidelberg.de; Walter, Cornelia; Rahn, Angelika; Wertz, Hansjoerg; Loeb, Iris; Lohr, Frank; Wenz, Frederik

    2006-12-01

Purpose: The repositioning accuracy of mask-based fixation systems has been assessed with two-dimensional/two-dimensional or two-dimensional/three-dimensional (3D) matching. We analyzed the accuracy of commercially available head mask systems, using true 3D/3D matching, with X-ray volume imaging and cone-beam CT. Methods and Materials: Twenty-one patients receiving radiotherapy (intracranial/head-and-neck tumors) were evaluated (14 patients with rigid and 7 with thermoplastic masks). X-ray volume imaging was analyzed online and offline separately for the skull and neck regions. Translation/rotation errors of the target isocenter were analyzed. Four patients were treated to neck sites. For these patients, repositioning was aided by additional body tattoos. A separate analysis of the setup error on the basis of the registration of the cervical vertebra was performed. The residual error after correction and intrafractional motility were calculated. Results: The mean length of the displacement vector for rigid masks was 0.312 ± 0.152 cm (intracranial) and 0.586 ± 0.294 cm (neck). For the thermoplastic masks, the value was 0.472 ± 0.174 cm (intracranial) and 0.726 ± 0.445 cm (neck). Rigid masks with body tattoos had a displacement vector length in the neck region of 0.35 ± 0.197 cm. The intracranial residual error and intrafractional motility after X-ray volume imaging correction for rigid masks was 0.188 ± 0.074 cm, and was 0.134 ± 0.14 cm for thermoplastic masks. Conclusions: The results of our study have demonstrated that rigid masks have a high intracranial repositioning accuracy per se. Given the small residual error and intrafractional movement, thermoplastic masks may also be used for high-precision treatments when combined with cone-beam CT. The neck region repositioning accuracy was worse than the intracranial accuracy in both cases. However, body tattoos and image guidance improved the accuracy. Finally, the combination of both mask

  10. High Time Resolution Photon Counting 3D Imaging Sensors

    NASA Astrophysics Data System (ADS)

    Siegmund, O.; Ertley, C.; Vallerga, J.

    2016-09-01

Novel sealed tube microchannel plate (MCP) detectors using next generation cross strip (XS) anode readouts and high performance electronics have been developed to provide photon counting imaging sensors for astronomy and high time resolution 3D remote sensing. 18 mm aperture sealed tubes with MCPs and high efficiency Super-GenII or GaAs photocathodes have been implemented to access the visible/NIR regimes for ground based research, astronomical and space sensing applications. The cross strip anode readouts in combination with PXS-II high speed event processing electronics can process high single photon counting event rates at >5 MHz (~80 ns dead-time per event), and time stamp events to better than 25 ps. Furthermore, we are developing a high speed ASIC version of the electronics for low power/low mass spaceflight applications. For a GaAs tube the peak quantum efficiency has degraded from 30% (at 560 - 850 nm) to 25% over 4 years, but for Super-GenII tubes the peak quantum efficiency of 17% (peak at 550 nm) has remained unchanged for over 7 years. The Super-GenII tubes have a uniform spatial resolution of <30 μm FWHM (~1 × 10⁶ gain) and single event timing resolution of 100 ps (FWHM). The relatively low MCP gain photon counting operation also permits longer overall sensor lifetimes and high local counting rates. Using the high timing resolution, we have demonstrated 3D object imaging with laser pulse (630 nm, 45 ps jitter, Pilas laser) reflections in single photon counting mode with spatial and depth sensitivity of the order of a few millimeters. A 50 mm Planacon sealed tube was also constructed, using atomic layer deposited microchannel plates which potentially offer better overall sealed tube lifetime, quantum efficiency and gain stability. This tube achieves standard bialkali quantum efficiency levels, is stable, and has been coupled to the PXS-II electronics and used to detect and image fast laser pulse signals.

  11. 3D deterministic lateral displacement separation systems

    NASA Astrophysics Data System (ADS)

    Du, Siqi; Drazer, German

    2016-11-01

We present a simple modification to enhance the separation ability of deterministic lateral displacement (DLD) systems by expanding the two-dimensional nature of these devices and driving the particles into size-dependent, fully three-dimensional trajectories. Specifically, we drive the particles through an array of long cylindrical posts, such that they not only move parallel to the basal plane of the posts as in traditional two-dimensional DLD systems (in-plane motion), but also along the axial direction of the solid posts (out-of-plane motion). We show that the (projected) in-plane motion of the particles is completely analogous to that observed in 2D-DLD systems and the observed trajectories can be predicted based on a model developed in the 2D case. More importantly, we analyze the particles' out-of-plane motion and observe significant differences in the net displacement depending on particle size. Therefore, taking advantage of both the in-plane and out-of-plane motion of the particles, it is possible to achieve the simultaneous fractionation of a polydisperse suspension into multiple streams. We also discuss other modifications to the obstacle array and driving forces that could enhance separation in microfluidic devices.

  12. Denoising for 3-d photon-limited imaging data using nonseparable filterbanks.

    PubMed

    Santamaria-Pang, Alberto; Bildea, Teodor Stefan; Tan, Shan; Kakadiaris, Ioannis A

    2008-12-01

    In this paper, we present a novel frame-based denoising algorithm for photon-limited 3-D images. We first construct a new 3-D nonseparable filterbank by adding elements to an existing frame in a structurally stable way. In contrast with the traditional 3-D separable wavelet system, the new filterbank is capable of using edge information in multiple directions. We then propose a data-adaptive hysteresis thresholding algorithm based on this new 3-D nonseparable filterbank. In addition, we develop a new validation strategy for denoising of photon-limited images containing sparse structures, such as neurons (the structure of interest is less than 5% of total volume). The validation method, based on tubular neighborhoods around the structure, is used to determine the optimal threshold of the proposed denoising algorithm. We compare our method with other state-of-the-art methods and report very encouraging results on applications utilizing both synthetic and real data.
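Hysteresis thresholding, as described above, keeps coefficients above a high threshold plus any above a lower threshold that connect to them. A minimal 3D sketch under face-adjacent (6-neighbour) connectivity; the connectivity rule and threshold handling here are generic assumptions, not the authors' exact data-adaptive algorithm:

```python
import numpy as np
from collections import deque

def hysteresis_threshold(coeffs, low, high):
    """Binary mask: voxels with |coeff| >= high, plus voxels with
    |coeff| >= low reachable from them through face-adjacent neighbours."""
    mag = np.abs(coeffs)
    strong = mag >= high
    weak = mag >= low
    keep = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))          # BFS seeds
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in nbrs:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(p, coeffs.shape)):
                if weak[p] and not keep[p]:
                    keep[p] = True
                    queue.append(p)
    return keep
```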

  13. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    NASA Astrophysics Data System (ADS)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  14. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range and therefore introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined according to the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensation of the target shape changes. The entire process of capturing the 3D surface data and subsequent calculation of beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 deg. in rotation. Importantly, the target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.
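The abstract does not spell out the least-squares setup method; a common way to fit a rigid transform to matched surface or marker points is the Kabsch algorithm, sketched below as an assumption rather than the authors' exact procedure:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i
    (Kabsch algorithm). src and dst are (n, 3) arrays of matched points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```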

  15. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still preserving valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
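Sparsity-regularized linearized inversion of this kind can be illustrated with iterative soft-thresholding (ISTA), which minimizes a least-squares data term plus an L1 penalty. This is a generic dense-matrix sketch, not the authors' NUFFT-based implementation:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Minimise 0.5 * ||A x - y||^2 + lam * ||x||_1 by iterative
    soft-thresholding; the L1 term suppresses sidelobes and clutter."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```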

  16. Constraining 3D Process Sedimentological Models to Geophysical Data Using Image Quilting

    NASA Astrophysics Data System (ADS)

    Tahmasebi, P.; Da Pra, A.; Pontiggia, M.; Caers, J.

    2014-12-01

3D process geological models, whether for carbonate or sedimentological systems, have been proposed for modeling realistic subsurface heterogeneity. The problem with such forward process models is that they are not constrained to any subsurface data, whether wells or geophysical surveys. We propose a new method for realistic geological modeling of complex heterogeneity by hybridizing 3D process modeling of geological deposition with conditioning by means of a novel multiple-point geostatistics (MPS) technique termed image quilting (IQ). Image quilting is a pattern-based technique that stitches together patterns extracted from training images to generate stochastic realizations that look like the training image. In this paper, we illustrate how 3D process model realizations can be used as training images in image quilting. To constrain the realization to seismic data we first interpret each facies in the geophysical data. These interpretations, while overly smooth and not reflecting finer-scale variation, are used as auxiliary variables in the generation of the image quilting realizations. To condition to well data, we first perform a kriging of the well data to generate a kriging map and kriging variance. The kriging map is used as an additional auxiliary variable while the kriging variance is used as a weight given to the kriging-derived auxiliary variable. We present an application to a giant offshore reservoir. Starting from seismic advanced attribute analysis and sedimentological interpretation, we build the 3D sedimentological process-based model and use it as a non-stationary training image for conditional image quilting.

  17. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  18. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    NASA Astrophysics Data System (ADS)

    Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

    2011-09-01

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  19. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-01-01

One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506
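The scalability analysis cited above rests on Amdahl's law, which bounds speedup by the serial fraction of the workload; a minimal sketch of the model (the 0.95 fraction below is illustrative, not from the paper):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Ideal speedup when `parallel_fraction` of the work scales perfectly
    across `n_cores` and the rest stays serial (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Illustrative: 95% parallel work on 12 cores gives roughly 7.7x speedup.
speedup = amdahl_speedup(0.95, 12)
```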

  20. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    PubMed

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, a better understanding is still needed because of the subtle differences among PSI systems. In this study, a 3D printing technique based on computed tomography (CT) image data has been utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as the PSI group and the control group, with no significant difference in age and sex (p > 0.05). The PSI group was treated with 3D printed cutting guides whereas the control group was treated with conventional instrumentation (CI). By evaluating the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of patients, it can be found that the preoperative quantitative assessment and intraoperative changes can be controlled with PSI whereas CI relies on experience. In terms of postoperative parameters, such as hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there is a significant improvement in achieving the desired implant position (p < 0.05). Derived from the morphology of patients' knees, the PSI represents the convergence of congruent designs with current personalized treatment tools. The PSI can achieve less extremity malalignment and greater accuracy of prosthesis implantation compared with the control method, which indicates potential for optimal HKA, FFC, and FTC angles.

  1. 3-D Velocity Measurement of Natural Convection Using Image Processing

    NASA Astrophysics Data System (ADS)

    Shinoki, Masatoshi; Ozawa, Mamoru; Okada, Toshifumi; Kimura, Ichiro

This paper describes a quantitative three-dimensional measurement method for the flow field of a rotating Rayleigh-Benard convection in a cylindrical cell heated below and cooled above. A correlation method for two-dimensional measurement was extended to a spatio-temporal correlation method. Erroneous vectors, which often appear in the correlation method, were successfully removed using a Hopfield neural network. As a result, the calculated 3-D velocity vector distribution corresponded well to the observed temperature distribution. Consequently, a simultaneous three-dimensional measurement system for temperature and flow field was developed.
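At the core of such correlation methods is estimating the displacement of an interrogation window between two frames from the peak of their cross-correlation. A generic FFT-based sketch (integer-pixel, circular correlation; not the paper's spatio-temporal variant):

```python
import numpy as np

def displacement_by_correlation(win_a, win_b):
    """Integer-pixel shift of win_b relative to win_a, taken from the peak
    of their circular cross-correlation (windows are demeaned first)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```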

  2. Imaging topological radar for 3D imaging in cultural heritage reproduction and restoration

    NASA Astrophysics Data System (ADS)

    Poggi, Claudio; Guarneri, Massimiliano; Fornetti, Giorgio; Ferri de Collibus, Mario; De Dominicis, Luigi; Paglia, Emiliano; Ricci, Roberto

    2005-10-01

We present the latest results obtained using our Imaging Topological Radar (ITR), a high-resolution laser scanner aimed at reconstructing 3D digital models of real targets, either single objects or complex scenes. The system, based on an amplitude modulation ranging technique, makes it possible to obtain simultaneously a shade-free, high resolution, photographic-like picture and accurate range data in the form of a range image, with resolution depending mainly on the laser modulation frequency (current best performance is ~100 μm). The complete target surface is reconstructed from sampled points by using specifically developed software tools. The system has been successfully applied to scan different types of real surfaces (stone, wood, alloy, bones) and is suitable for relevant applications in different fields, ranging from industrial machining to medical diagnostics. We present some relevant examples of 3D reconstruction in the heritage field. These results were obtained during recent campaigns carried out in situ in various Italian historical and archaeological sites (S. Maria Antiqua in the Roman Forum, "Grotta dei cervi" Porto Badisco - Lecce, South Italy). The presented 3D models will be used by cultural heritage conservation authorities for restoration purposes and will be available on the Internet for remote inspection.
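Amplitude-modulation ranging recovers distance from the phase shift of the modulated return over the round trip; the standard relations (r = c·Δφ/(4πf), unambiguous up to c/2f) can be sketched as:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def am_range(phase_shift_rad, mod_freq_hz):
    """Target range from the measured phase shift of the amplitude
    modulation; the factor 4*pi accounts for the round trip."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Maximum range before the modulation phase wraps around."""
    return C / (2.0 * mod_freq_hz)
```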

  3. Quantitative Multiscale Cell Imaging in Controlled 3D Microenvironments

    PubMed Central

    Welf, Erik S.; Driscoll, Meghan K.; Dean, Kevin M.; Schäfer, Claudia; Chu, Jun; Davidson, Michael W.; Lin, Michael Z.; Danuser, Gaudenz; Fiolka, Reto

    2016-01-01

    The microenvironment determines cell behavior, but the underlying molecular mechanisms are poorly understood because quantitative studies of cell signaling and behavior have been challenging due to insufficient spatial and/or temporal resolution and limitations on microenvironmental control. Here we introduce microenvironmental selective plane illumination microscopy (meSPIM) for imaging and quantification of intracellular signaling and submicrometer cellular structures as well as large-scale cell morphological and environmental features. We demonstrate the utility of this approach by showing that the mechanical properties of the microenvironment regulate the transition of melanoma cells from actin-driven protrusion to blebbing, and we present tools to quantify how cells manipulate individual collagen fibers. We leverage the nearly isotropic resolution of meSPIM to quantify the local concentration of actin and phosphatidylinositol 3-kinase signaling on the surfaces of cells deep within 3D collagen matrices and track the many small membrane protrusions that appear in these more physiologically relevant environments. PMID:26906741

  4. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    NASA Astrophysics Data System (ADS)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
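A compact sketch of standard fuzzy c-means, the core of the unsupervised methods above (the random initialization and fixed iteration count here are simplifying assumptions, not the paper's initialization strategies):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: returns (centers, memberships U) for data X of
    shape (n_samples, n_features), fuzzifier m > 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                 # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```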

  5. 3D imaging of semiconductor components by discrete laminography

    NASA Astrophysics Data System (ADS)

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  6. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  7. 3D surface scan of biological samples with a Push-broom Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Kincaid, Russell; Hruska, Zuzana; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2013-08-01

    The food industry is always on the lookout for sensing technologies for rapid and nondestructive inspection of food products. Hyperspectral imaging technology integrates both imaging and spectroscopy into unique imaging sensors. Its application for food safety and quality inspection has made significant progress in recent years. Specifically, hyperspectral imaging has shown its potential for surface contamination detection in many food related applications. Most existing hyperspectral imaging systems use pushbroom scanning which is generally used for flat surface inspection. In some applications it is desirable to be able to acquire hyperspectral images on circular objects such as corn ears, apples, and cucumbers. Past research describes inspection systems that examine all surfaces of individual objects. Most of these systems did not employ hyperspectral imaging. These systems typically utilized a roller to rotate an object, such as an apple. During apple rotation, the camera took multiple images in order to cover the complete surface of the apple. The acquired image data lacked the spectral component present in a hyperspectral image. This paper discusses the development of a hyperspectral imaging system for a 3-D surface scan of biological samples. The new instrument is based on a pushbroom hyperspectral line scanner using a rotational stage to turn the sample. The system is suitable for whole surface hyperspectral imaging of circular objects. In addition to its value to the food industry, the system could be useful for other applications involving 3-D surface inspection.

  8. A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics.

    PubMed

    Yoshino, Masanori; Saito, Toki; Kin, Taichi; Nakagawa, Daichi; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2015-01-01

Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system. Our navigation system consists of three components: the operative microscope, registration, and the image display system. An optical tracker was attached to the microscope to monitor the position and attitude of the microscope in real time; point-pair registration was used to register the operating room coordinate system and the image coordinate system; and the image display system showed the 3D CG image in the field-of-view of the microscope. Ten neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model. The accuracy of our system was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. The target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.

  9. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
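The MSE metric used above for comparing a denoised volume against a reference can be sketched as follows (PSNR is added as a closely related measure for context; MSSIM is omitted for brevity):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images or volumes."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB for the given dynamic range."""
    e = mse(a, b)
    return float("inf") if e == 0.0 else 10.0 * np.log10(data_range ** 2 / e)
```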

  10. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

3D computer graphics (3D-CG) animation using a speaking virtual actor is very effective as an educational medium. But it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  11. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
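Measuring the full width at half maximum of a sampled microbeam profile, as reported above, can be sketched with linear interpolation at the half-maximum crossings (a generic procedure, not necessarily the authors' exact one):

```python
import numpy as np

def fwhm(profile, pixel_size):
    """Full width at half maximum of a sampled beam profile, with linear
    interpolation at the half-maximum crossings on each side."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    left, right = above[0], above[-1]
    x_left = (left - (y[left] - half) / (y[left] - y[left - 1])
              if left > 0 else float(left))
    x_right = (right + (y[right] - half) / (y[right] - y[right + 1])
               if right < y.size - 1 else float(right))
    return (x_right - x_left) * pixel_size
```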

  12. The 3D scanner prototype utilizes object profile imaging using a line laser and Octave software

    NASA Astrophysics Data System (ADS)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner (3D scanner) is a device that reconstructs a real object into digital form on a computer. 3D scanning technology is under active development, especially in developed countries, where current devices are advanced and very expensive. This study presents a simple 3D scanner prototype with very low investment cost. The prototype consists of a webcam, a rotating desk driven by a stepper motor controlled by an Arduino UNO, and a line laser. The research is limited to objects whose radius is constant about the center point (object pivot). Scanning is performed by imaging the object profile cast by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. For each image acquisition, the scanned object on the rotating desk is rotated by a fixed increment, so that over one full turn multiple images covering every side are obtained. The profile in each image is then extracted to obtain the digital object dimensions, which are calibrated against a length standard (gage block). The calibrated dimensions are then digitally reconstructed into a three-dimensional object. The reconstruction is validated against the original object dimensions, expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
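    The pipeline the abstract describes (laser-line profile per rotation step, converted to cylindrical coordinates and accumulated into a point cloud) can be sketched as follows. This is a hypothetical Python illustration rather than the authors' Octave code; `mm_per_px` stands in for the gage-block calibration factor:

```python
import numpy as np

def reconstruct_turntable_scan(profiles, mm_per_px, deg_per_step):
    """Convert per-step laser-line profiles into a 3D point cloud.

    profiles: array of shape (n_steps, n_rows) giving, for each turntable
    step and image row, the horizontal offset (in pixels) of the laser
    line from the rotation axis; NaN marks rows with no laser return.
    """
    points = []
    for step, profile in enumerate(profiles):
        theta = np.deg2rad(step * deg_per_step)
        for row, offset_px in enumerate(profile):
            if np.isnan(offset_px):
                continue
            r = offset_px * mm_per_px   # radius, calibrated e.g. by a gage block
            z = row * mm_per_px         # height along the rotation axis
            points.append((r * np.cos(theta), r * np.sin(theta), z))
    return np.asarray(points)

# toy example: a cylinder of radius 10 mm seen over one full turn
profiles = np.full((36, 50), 100.0)     # 100 px laser offset everywhere
cloud = reconstruct_turntable_scan(profiles, mm_per_px=0.1, deg_per_step=10)
radii = np.hypot(cloud[:, 0], cloud[:, 1])
print(radii.min(), radii.max())         # all points at radius 10 mm
```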

  13. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    PubMed Central

    Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard

    2005-01-01

    Background Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data ranging from commercial visualization packages to freely available, typically system architecture dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture neutral Java programming language. Results We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. Conclusion We conclude that Java provides an appropriate environment for efficient development of these tools and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily. PMID:15757508
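    The "arbitrary re-sectioning" such a viewer performs amounts to resampling the volume along an oblique plane. A minimal sketch (not JAtlasView's Woolz-based implementation), using nearest-neighbour sampling for brevity where a production viewer would interpolate:

```python
import numpy as np

def oblique_slice(volume, origin, u, v, shape):
    """Resample an arbitrary plane from a 3D volume.

    origin: a point on the plane; u, v: orthonormal in-plane axes,
    all in voxel coordinates; shape: (rows, cols) of the output slice.
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    coords = (origin[:, None, None]
              + rows * u[:, None, None]
              + cols * v[:, None, None])
    idx = np.rint(coords).astype(int)
    for axis, size in enumerate(volume.shape):   # clamp to the volume bounds
        idx[axis] = np.clip(idx[axis], 0, size - 1)
    return volume[idx[0], idx[1], idx[2]]

vol = np.random.default_rng(3).random((32, 32, 32))
u = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)       # 45-degree in-plane axis
v = np.array([0.0, 0.0, 1.0])
sl = oblique_slice(vol, np.array([16.0, 0.0, 0.0]), u, v, (20, 20))
print(sl.shape)   # (20, 20)
```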

  14. Real-time, high-accuracy 3D imaging and shape measurement.

    PubMed

    Nguyen, Hieu; Nguyen, Dung; Wang, Zhaoyang; Kieu, Hien; Le, Minh

    2015-01-01

    In spite of the recent advances in 3D shape measurement and geometry reconstruction, simultaneously achieving fast-speed and high-accuracy performance remains a big challenge in practice. In this paper, a 3D imaging and shape measurement system is presented to tackle such a challenge. The fringe-projection-profilometry-based system employs a number of advanced approaches, including composition of phase-shifted fringe patterns, externally triggered synchronization of system components, generalized system setup, an ultrafast phase-unwrapping algorithm, a flexible system calibration method, a robust gamma correction scheme, multithread computation and processing, and graphics-processing-unit-based image display. Experiments have shown that the proposed system can acquire and display high-quality 3D reconstructed images and/or video stream at a speed of 45 frames per second with relative accuracy of 0.04% or at a reduced speed of 22.5 frames per second with enhanced accuracy of 0.01%. The 3D imaging and shape measurement system shows great promise for satisfying the ever-increasing demands of scientific and engineering applications.
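    Of the techniques listed, phase-shifted fringe analysis is the most self-contained to illustrate. A minimal sketch of standard four-step phase shifting (the paper's exact fringe composition scheme is not given in the abstract, so this is the textbook form):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by pi/2 each:
    I_k = A + B*cos(phi + k*pi/2), k = 0..3.
    Then I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

# synthetic fringes over a known phase ramp
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 200)
A, B = 128.0, 100.0
imgs = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*imgs)
print(np.max(np.abs(recovered - phi)))   # ~0: exact for noise-free data
```

The recovered phase is wrapped to (-pi, pi]; a real profilometry system follows this with phase unwrapping and phase-to-height calibration.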

  15. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    NASA Astrophysics Data System (ADS)

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited by microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner, total nodule volumes are rapidly estimated and are shown here to exhibit contrasting time-dependent changes during growth and in response to treatment. This work suggests the utility of DHM to quantify changes in 3D structure over time and motivates further development of this approach for time-lapse monitoring of 3D morphological changes that would otherwise be impractical to visualize.
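    The thickness estimate underlying such volume measurements follows directly from the optical path difference encoded in the unwrapped phase. A hedged sketch, with hypothetical values for the wavelength and the nodule-to-medium refractive-index difference `delta_n` (the abstract does not state the authors' values):

```python
import numpy as np

def nodule_volume(phase_map, wavelength_um, delta_n, pixel_area_um2):
    """Estimate nodule volume from an unwrapped DHM phase image.

    Optical path difference: phase = 2*pi*delta_n*t / wavelength, so
    thickness t = phase * wavelength / (2*pi*delta_n). Summing thickness
    over pixels and scaling by pixel area gives volume in um^3.
    """
    thickness = phase_map * wavelength_um / (2 * np.pi * delta_n)
    return thickness.sum() * pixel_area_um2

# a flat 100x100-pixel "nodule" with a uniform 2*pi phase shift
phase = np.full((100, 100), 2 * np.pi)
v = nodule_volume(phase, wavelength_um=0.633, delta_n=0.05, pixel_area_um2=1.0)
print(v)   # 100*100 * (0.633/0.05) um^3
```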

  16. 3D texture analysis in renal cell carcinoma tissue image grading.

    PubMed

    Kim, Tae-Yun; Cho, Nam-Hoon; Jeong, Goo-Bo; Bengtsson, Ewert; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level co-occurrence matrix (GLCM) and a 3D wavelet transform based on two types of basis functions. To evaluate their validity, we predefined six different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.
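    A 3D GLCM generalizes the 2D case by counting voxel-pair co-occurrences along a 3D displacement vector. A minimal sketch with two classic Haralick-style features; the study's actual offsets, gray-level quantization, and feature set are not specified in the abstract, so the choices below are illustrative:

```python
import numpy as np

def glcm_3d(volume, offset=(0, 0, 1), levels=8):
    """3D gray-level co-occurrence counts for one non-negative
    displacement vector (dz, dy, dx); volume values lie in [0, levels)."""
    dz, dy, dx = offset
    nz, ny, nx = volume.shape
    ref = volume[:nz - dz, :ny - dy, :nx - dx]   # reference voxels
    nbr = volume[dz:, dy:, dx:]                  # displaced neighbours
    glcm = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(glcm, (ref.ravel(), nbr.ravel()), 1)
    return glcm

def glcm_features(glcm):
    """Contrast and energy from the normalized co-occurrence matrix."""
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy

vol = np.random.default_rng(0).integers(0, 8, size=(16, 16, 16))
g = glcm_3d(vol, offset=(0, 0, 1), levels=8)
print(glcm_features(g))
```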

  17. 3-D Ultrafast Doppler Imaging Applied to the Noninvasive and Quantitative Imaging of Blood Vessels in Vivo

    PubMed Central

    Provost, J.; Papadacci, C.; Demene, C.; Gennisson, J-L.; Tanter, M.; Pernot, M.

    2016-01-01

    Ultrafast Doppler Imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D Ultrafast Ultrasound Imaging, a technique that can produce thousands of ultrasound volumes per second, based on three-dimensional plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that non-invasive 3-D Ultrafast Power Doppler, Pulsed Doppler, and Color Doppler Imaging can be used to perform quantitative imaging of blood vessels in humans when using coherent compounding of three-dimensional tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D Ultrafast Imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. 3-D Ultrafast Power Doppler Imaging was first validated by imaging Tygon tubes of varying diameter and its in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D Color and Pulsed Doppler Imaging using compounded emissions were also applied in the carotid artery and the jugular vein in one healthy volunteer. PMID:26276956
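    The voxel-wise Doppler processing can be illustrated with power Doppler on a simulated IQ ensemble. This sketch uses SVD clutter filtering, a common choice that the abstract does not specify, and hypothetical array shapes:

```python
import numpy as np

def power_doppler(iq_ensemble, n_clutter=2):
    """Voxel-wise power Doppler from an ensemble of beamformed IQ volumes.

    iq_ensemble: complex array (n_emissions, nz, ny, nx). A crude clutter
    filter zeroes the first n_clutter SVD components of the Casorati
    matrix (dominant slow tissue motion); the remaining energy per voxel
    is taken as the Doppler power.
    """
    n, *shape = iq_ensemble.shape
    casorati = iq_ensemble.reshape(n, -1)          # emissions x voxels
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s[:n_clutter] = 0                              # discard clutter subspace
    blood = u @ np.diag(s) @ vh
    power = (np.abs(blood) ** 2).mean(axis=0)
    return power.reshape(shape)

rng = np.random.default_rng(2)
iq = rng.normal(size=(16, 4, 8, 8)) + 1j * rng.normal(size=(16, 4, 8, 8))
pd = power_doppler(iq)
print(pd.shape)   # (4, 8, 8)
```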

  18. A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

    PubMed Central

    Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

    2014-01-01

    We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. To be more specific, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency alignment with subjective assessment. PMID:25133265
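    The index itself is a pointwise gradient-magnitude similarity averaged over the volume. A sketch assuming the common GMS form (2·g1·g2 + c) / (g1² + g2² + c); the stability constant `c` and the gradient operator here are assumptions, as the paper uses its own kernels along the three directions:

```python
import numpy as np

def gms_3d(vol_ref, vol_dist, c=170.0):
    """Pointwise 3D gradient-magnitude similarity, averaged over the volume.

    Gradient magnitudes are taken along the three volume axes via
    np.gradient; identical volumes score exactly 1.0.
    """
    g_ref = np.linalg.norm(np.gradient(vol_ref.astype(float)), axis=0)
    g_dist = np.linalg.norm(np.gradient(vol_dist.astype(float)), axis=0)
    gms = (2 * g_ref * g_dist + c) / (g_ref ** 2 + g_dist ** 2 + c)
    return gms.mean()

rng = np.random.default_rng(1)
vol = rng.uniform(0, 255, size=(8, 32, 32))    # stack across disparity space
noisy = vol + rng.normal(0, 10, vol.shape)     # a distorted counterpart
print(gms_3d(vol, vol))     # 1.0 for identical volumes
print(gms_3d(vol, noisy))   # < 1.0
```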

  19. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  20. Multi-camera system for 3D forensic documentation.

    PubMed

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional doc