Automatic 3D power line reconstruction of multi-angular imaging power line inspection system
NASA Astrophysics Data System (ADS)
Zhang, Wuming; Yan, Guangjian; Wang, Ning; Li, Qiaozhi; Zhao, Wei
2007-06-01
We develop a multi-angular imaging power line inspection system. Its main objective is to monitor the relative distance between the high-voltage power line and surrounding objects, and to raise an alert if the warning threshold is exceeded. The system generates a digital surface model (DSM) of the power line corridor, which comprises the ground surface and ground objects such as trees and houses. To reveal dangerous regions, where ground objects are too close to the power line, 3D power line information must be extracted at the same time. In order to improve the level of automation and reduce labour costs and human errors, an automatic 3D power line reconstruction method is proposed and implemented. Homologous projections of the power line are matched using the epipolar constraint and prior knowledge of the pole towers' height; the 3D power line is then obtained by space intersection of the matched projections. A flight experiment shows that the proposed method successfully reconstructs the 3D power line, and that the measurement accuracy of the relative distance satisfies the user requirement of 0.5 m.
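The space-intersection step described above can be sketched as linear (DLT) triangulation of a pair of matched projections. The camera matrices and the 3D point below are illustrative, not taken from the paper:

```python
import numpy as np

def space_intersection(P1, P2, x1, x2):
    """Triangulate a 3D point from two matched (homologous) projections
    by linear least squares (DLT). P1, P2: 3x4 camera projection matrices;
    x1, x2: (u, v) image coordinates of the matched projections."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative stereo pair: identity intrinsics, 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
X_hat = space_intersection(P1, P2, project(P1, X_true), project(P2, X_true))
```

With exact (noise-free) projections the null vector of A recovers the point exactly; with real image noise the SVD solution minimises the algebraic residual instead.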
NASA Astrophysics Data System (ADS)
Plotnikov, Illya; Vourlidas, Angelos; Tylka, Allan J.; Pinto, Rui; Rouillard, Alexis; Tirole, Margot
2016-07-01
Identifying the physical mechanisms that produce the most energetic particles is a long-standing observational and theoretical challenge in astrophysics. Strong pressure waves have been proposed as efficient accelerators both in the solar and astrophysical contexts via various mechanisms such as diffusive-shock/shock-drift acceleration and betatron effects. In diffusive-shock acceleration, the efficacy of the process relies on shock waves being super-critical, or moving several times faster than the characteristic speed of the medium they propagate through (a high Alfven Mach number), and on the orientation of the magnetic field upstream of the shock front. High-cadence, multipoint imaging using the NASA STEREO, SOHO and SDO spacecraft now permits the 3-D reconstruction of pressure waves formed during the eruption of coronal mass ejections. Using these unprecedented capabilities, some recent studies have provided new insights into the timing and longitudinal extent of solar energetic particles, including the first derivations of the time-dependent 3-dimensional distribution of the expansion speed and Mach numbers of coronal shock waves. We will review these recent developments by focusing on particle events that occurred between 2011 and 2015. These new techniques also provide the opportunity to investigate the enigmatic long-duration gamma-ray events.
3D Ion Temperature Reconstruction
NASA Astrophysics Data System (ADS)
Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi
2009-11-01
The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in the toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics has been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is repeated over several reproducible discharges to track the heating and acceleration process during the merging reconnection.
Forensic 3D Scene Reconstruction
Little, Charles Q.; Peters, Ralph R.; Rigdon, J. Brian; Small, Daniel E.
1999-10-12
Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.
Forensic 3D scene reconstruction
NASA Astrophysics Data System (ADS)
Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.
2000-05-01
Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.
3D puzzle reconstruction for archeological fragments
NASA Astrophysics Data System (ADS)
Jampy, F.; Hostein, A.; Fauvet, E.; Laligant, O.; Truchetet, F.
2015-03-01
The reconstruction of broken artifacts is a common task in archeology; it can now be supported by 3D data acquisition devices and computer processing. Many past works have been dedicated to reconstructing 2D puzzles, but very few propose a true 3D approach. We present here a complete solution including a dedicated transportable 3D acquisition set-up and a virtual tool with a graphic interface that allows archeologists to manipulate the fragments and interactively reconstruct the puzzle. The whole lateral part is acquired by rotating the fragment around an axis chosen within a light sheet, using a step motor synchronized with the camera frame clock. Another camera provides a top view of the fragment under scanning. A scanning accuracy of 100 μm is attained. The iterative automatic processing algorithm is based on segmentation of the lateral part of the fragments into facets, followed by a 3D matching that provides the user with a ranked short list of possible assemblies. The device has been applied to the reconstruction of a set of 1200 fragments from broken tablets bearing a Latin inscription dating from the first century AD.
3D EIT image reconstruction with GREIT.
Grychtol, Bartłomiej; Müller, Beat; Adler, Andy
2016-06-01
Most applications of thoracic EIT use a single plane of electrodes on the chest, from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are currently little used. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show that 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
The PRISM3D paleoenvironmental reconstruction
Dowsett, H.; Robinson, M.; Haywood, A.M.; Salzmann, U.; Hill, Daniel; Sohl, L.E.; Chandler, M.; Williams, Mark; Foley, K.; Stoll, D.K.
2010-01-01
The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstruction is an internally consistent and comprehensive global synthesis of a past interval of relatively warm and stable climate. It is regularly used in model studies that aim to better understand Pliocene climate, to improve model performance in future climate scenarios, and to distinguish model-dependent climate effects. The PRISM reconstruction is constantly evolving in order to incorporate additional geographic sites and environmental parameters, and is continuously refined by independent research findings. The new PRISM three dimensional (3D) reconstruction differs from previous PRISM reconstructions in that it includes a subsurface ocean temperature reconstruction, integrates geochemical sea surface temperature proxies to supplement the faunal-based temperature estimates, and uses numerical models for the first time to augment fossil data. Here we describe the components of PRISM3D and describe new findings specific to the new reconstruction. Highlights of the new PRISM3D reconstruction include removal of Hudson Bay and the Great Lakes and creation of open waterways in locations where the current bedrock elevation is less than 25m above modern sea level, due to the removal of the West Antarctic Ice Sheet and the reduction of the East Antarctic Ice Sheet. The mid-Piacenzian oceans were characterized by a reduced east-west temperature gradient in the equatorial Pacific, but PRISM3D data do not imply permanent El Niño conditions. The reduced equator-to-pole temperature gradient that characterized previous PRISM reconstructions is supported by significant displacement of vegetation belts toward the poles, is extended into the Arctic Ocean, and is confirmed by multiple proxies in PRISM3D. Arctic warmth coupled with increased dryness suggests the formation of warm and salty paleo North Atlantic Deep Water (NADW) and a more vigorous thermohaline circulation system that may
3D model reconstruction of underground goaf
NASA Astrophysics Data System (ADS)
Fang, Yuanmin; Zuo, Xiaoqing; Jin, Baoxuan
2005-10-01
By constructing a 3D model of an underground goaf, the mining process can be controlled better and mining work arranged more reasonably. However, the shapes of goafs and of the laneways among them are very irregular, which greatly complicates data acquisition and 3D model reconstruction. In this paper, we investigate methods for data acquisition and 3D model construction of underground goafs, and build topological relations among goafs. The main contents are as follows: a) An efficient encoding rule is proposed to structure the field measurement data. b) A 3D goaf model construction method is put forward that combines several TIN (triangulated irregular network) pieces, together with an efficient automatic TIN boundary processing algorithm. c) Topological relations among goaf models are established. The TIN object is the basic modelling element of the goaf 3D model, and the topological relations among goafs are created and maintained by building topological relations among TIN objects. On this basis, various 3D spatial analysis functions can be performed, including transects and volume calculation of goafs. A prototype that realizes the models and algorithms proposed in this paper has been developed.
IFSAR processing for 3D target reconstruction
NASA Astrophysics Data System (ADS)
Austin, Christian D.; Moses, Randolph L.
2005-05-01
In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
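The height error induced by multiple scatterers in one resolution cell, and the magnitude-difference test statistic, can be illustrated with a toy single-scatterer phase model. This sketch assumes the conventional linear phase-to-height relation φ = 2πBz/(λr); the system constants and scatterer amplitudes are invented for illustration:

```python
import numpy as np

# Hypothetical system constants (illustrative only).
wavelength = 0.03    # radar wavelength [m]
baseline = 10.0      # vertical interferometric baseline [m]
slant_range = 1.0e4  # range to the resolution cell [m]
k = 2 * np.pi * baseline / (wavelength * slant_range)  # phase-to-height factor [rad/m]

def ifsar_height(pix1, pix2):
    """Traditional IFSAR height estimate from the interferometric phase
    of one resolution cell (pix1, pix2: complex pixels from two apertures)."""
    return np.angle(pix2 * np.conj(pix1)) / k

def magnitude_ratio(pix1, pix2):
    """Candidate test statistic: with a single scatterer the two apertures
    see equal magnitudes; multiple scatterers make them differ."""
    return abs(abs(pix2) - abs(pix1)) / max(abs(pix1), abs(pix2))

# One scatterer at z = 10 m: a pure phase shift between the apertures.
z = 10.0
p1, p2 = 1.0 + 0j, np.exp(1j * k * z)

# Two scatterers (amplitudes 1.0 and 0.7) at z = 0 m and z = 10 m in the
# same cell: the magnitudes now differ and the phase height is biased.
q1 = 1.0 + 0.7  # both scatterers aligned in phase at aperture one
q2 = 1.0 * np.exp(1j * k * 0.0) + 0.7 * np.exp(1j * k * z)
z_single = ifsar_height(p1, p2)
z_biased = ifsar_height(q1, q2)
```

In the two-scatterer case the phase-derived height lands between the two true heights, weighted toward the stronger response, which is the error geometry the paper models.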
3D reconstruction of tensors and vectors
Defrise, Michel; Gullberg, Grant T.
2005-02-17
Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction from both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart, which presents a need for future theory for tensor tomography in a motion field. This means developing a better understanding of the MRI signal for diffusion processes in a deforming medium. The techniques developed may allow the application of MRI tensor tomography to the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and the spine, in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging, such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other acquisition geometries such as cone-beam tomography of tensor fields.
Adapting 3D Equilibrium Reconstruction to Reconstruct Weakly 3D H-mode Tokamaks
NASA Astrophysics Data System (ADS)
Cianciosa, M. R.; Hirshman, S. P.; Seal, S. K.; Unterberg, E. A.; Wilcox, R. S.; Wingen, A.; Hanson, J. D.
2015-11-01
The application of resonant magnetic perturbations for edge localized mode (ELM) mitigation breaks the toroidal symmetry of tokamaks. In these scenarios, the axisymmetric assumptions of the Grad-Shafranov equation no longer apply. By extension, equilibrium reconstruction tools built around these axisymmetric assumptions are insufficient to fully reconstruct a 3D perturbed equilibrium. 3D reconstruction tools typically work on systems where the 3D components of the signals are a significant fraction of the input signals. In nominally axisymmetric systems, applied field perturbations can be on the order of 1% of the main field or less. To reconstruct these equilibria, the 3D component of the signals must be isolated from the axisymmetric portion to provide the information necessary for reconstruction. This presentation will report on the adaptation of V3FIT for application to DIII-D H-mode discharges with applied resonant magnetic perturbations (RMPs). Newly implemented motional Stark effect signals and modeling of electric field effects will also be discussed. Work supported under U.S. DOE Cooperative Agreement DE-AC05-00OR22725.
Photogrammetric 3D reconstruction using mobile imaging
NASA Astrophysics Data System (ADS)
Fritsch, Dieter; Syll, Miguel
2015-03-01
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
Automated 3D reconstruction of interiors with multiple scan views
NASA Astrophysics Data System (ADS)
Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.
1998-12-01
This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction: an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for the purpose of automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available online via the RESOLV web page at http://www.scs.leeds.ac.uk/resolv/.
3D Surface Reconstruction and Automatic Camera Calibration
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre
2004-01-01
This viewgraph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.
3D scene reconstruction based on 3D laser point cloud combining UAV images
NASA Astrophysics Data System (ADS)
Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen
2016-03-01
Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning, with the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary, and uses 3ds Max software as the basic tool for 3D scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the 3D scene has good fidelity and that its accuracy meets the needs of 3D scene construction.
3D temperature field reconstruction using ultrasound sensing system
NASA Astrophysics Data System (ADS)
Liu, Yuqian; Ma, Tong; Cao, Chengyu; Wang, Xingwei
2016-04-01
3D temperature field reconstruction is of practical interest to the power, transportation and aviation industries, and it also opens up opportunities for real-time control or optimization of high-temperature fluid or combustion processes. In our paper, a new distributed optical fiber sensing system consisting of a series of elements is used to generate and receive acoustic signals. This system is the first active temperature field sensing system that combines the advantages of optical fiber sensors (distributed sensing capability) and acoustic sensors (non-contact measurement). Signals along multiple paths are measured simultaneously, enabled by a code division multiple access (CDMA) technique. A proposed Gaussian radial basis function (GRBF)-based approach then approximates the temperature field as a finite summation of space-dependent basis functions and time-dependent coefficients. The travel time of the acoustic signals depends on the temperature of the medium. On this basis, the Gaussian functions are integrated along a number of paths determined by the number and distribution of sensors. The inversion problem of estimating the unknown parameters of the Gaussian functions is solved from the measured times-of-flight (ToF) of the acoustic waves and the lengths of the propagation paths using the recursive least squares (RLS) method. The simulation results show an approximation error of less than 2% in 2D and 5% in 3D. This demonstrates the feasibility and efficiency of our proposed 3D temperature field reconstruction mechanism.
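A minimal 2D sketch of this reconstruction idea, assuming straight ray paths: the acoustic slowness field (which encodes temperature) is expanded in Gaussian RBFs, so each ToF measurement is linear in the coefficients, which are then recovered with recursive least squares. All geometry, basis placement and values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four Gaussian RBF centers on the unit square (illustrative layout).
centers = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
sigma = 0.3

def basis(points):
    """Evaluate every Gaussian RBF at an (N, 2) array of points."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))  # shape (N, n_basis)

def path_integrals(p0, p1, n=200):
    """Approximate the line integral of each basis along the segment p0 -> p1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = p0 + t * (p1 - p0)
    return basis(pts).mean(0) * np.linalg.norm(p1 - p0)

# Ground-truth coefficients and simulated ToF measurements on random paths
# (ToF = integral of slowness along the path, linear in the coefficients).
c_true = np.array([3.0, 2.0, 1.0, 2.5])
paths = [(rng.random(2), rng.random(2)) for _ in range(40)]
A = np.array([path_integrals(p0, p1) for p0, p1 in paths])
tof = A @ c_true

# Recursive least squares: fold in one path measurement at a time.
c_hat = np.zeros(4)
P = np.eye(4) * 1e6
for a, y in zip(A, tof):
    g = P @ a / (1.0 + a @ P @ a)
    c_hat = c_hat + g * (y - a @ c_hat)
    P = P - np.outer(g, a @ P)
```

Processing the measurements one at a time, as here, is what makes RLS attractive for real-time monitoring: the estimate can be updated as each new ToF arrives instead of re-solving the full system.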
3D Equilibrium Reconstructions in DIII-D
NASA Astrophysics Data System (ADS)
Lao, L. L.; Ferraro, N. W.; Strait, E. J.; Turnbull, A. D.; King, J. D.; Hirshman, H. P.; Lazarus, E. A.; Sontag, A. C.; Hanson, J.; Trevisan, G.
2013-10-01
Accurate and efficient 3D equilibrium reconstruction is needed in tokamaks for study of 3D magnetic field effects on experimentally reconstructed equilibrium and for analysis of MHD stability experiments with externally imposed magnetic perturbations. A large number of new magnetic probes have been recently installed in DIII-D to improve 3D equilibrium measurements and to facilitate 3D reconstructions. The V3FIT code has been in use in DIII-D to support 3D reconstruction and the new magnetic diagnostic design. V3FIT is based on the 3D equilibrium code VMEC that assumes nested magnetic surfaces. V3FIT uses a pseudo-Newton least-square algorithm to search for the solution vector. In parallel, the EFIT equilibrium reconstruction code is being extended to allow for 3D effects using a perturbation approach based on an expansion of the MHD equations. EFIT uses the cylindrical coordinate system and can include the magnetic island and stochastic effects. Algorithms are being developed to allow EFIT to reconstruct 3D perturbed equilibria directly making use of plasma response to 3D perturbations from the GATO, MARS-F, or M3D-C1 MHD codes. DIII-D 3D reconstruction examples using EFIT and V3FIT and the new 3D magnetic data will be presented. Work supported in part by US DOE under DE-FC02-04ER54698, DE-FG02-95ER54309 and DE-AC05-06OR23100.
3D Building Reconstruction Using Dense Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.
2016-06-01
Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the point clouds of roofs and walls into planar groups. By generating the related surfaces and using geometrical constraints together with symmetry considerations, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models reconstructed. The reconstructed model reaches LoD3 detail, with eaves, roof fractions and dormers modelled.
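The abstract does not state which segmentation algorithm splits the roof and wall point clouds into planar groups; one common choice for this step is RANSAC plane fitting, sketched here on synthetic data (all geometry is invented for illustration):

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Fit a plane n.x = d to a point cloud with RANSAC.
    Returns (normal, d, inlier_mask)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best = (None, None, np.zeros(len(points), bool))
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = n @ p0
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Synthetic roof patch on the plane z = 0.5*x, plus clutter points above it.
rng = np.random.default_rng(1)
xy = rng.random((200, 2))
roof = np.column_stack([xy, 0.5 * xy[:, 0]])
clutter = rng.random((20, 3)) + np.array([0.0, 0.0, 2.0])
cloud = np.vstack([roof, clutter])
n, d, mask = ransac_plane(cloud)
```

In a full pipeline the inliers would be removed and the fit repeated to peel off one planar group (roof face or wall) at a time.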
Interior Reconstruction Using the 3D Hough Transform
NASA Astrophysics Data System (ADS)
Dumitru, R.-C.; Borrmann, D.; Nüchter, A.
2013-02-01
Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need to characterize and quantify complex environments in an automatic fashion arises, posing challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
Tomographic system for 3D temperature reconstruction
NASA Astrophysics Data System (ADS)
Antos, Martin; Malina, Radomir
2003-11-01
The novel laboratory system for optical tomography is used to obtain the three-dimensional temperature field around a heated element. Mach-Zehnder holographic interferometers with diffusive illumination of the phase object make it possible to record multidirectional holographic interferograms over viewing angles from 0° to 108°. These interferograms form the input data for computed tomography of the 3D distribution of the refractive index variation, which characterizes the physical state of the studied medium. The configuration of the system allows automatic projection scanning of the studied phase object. The computer calculates the wavefront deformation for each projection, making use of different Fourier-transform and phase-sampling evaluation methods. The experimental set-up is presented together with experimental results.
3D scene reconstruction from multi-aperture images
NASA Astrophysics Data System (ADS)
Mao, Miao; Qin, Kaihuai
2014-04-01
With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via a programmable aperture. Secondly, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
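In the binocular stereo step, the depth of a matched point in a rectified pair follows from its disparity via Z = fB/d. A minimal sketch with illustrative camera numbers (focal length, baseline and pixel coordinates are made up):

```python
def depth_from_disparity(u_left, u_right, f_px, baseline_m):
    """Depth of a matched feature in a rectified stereo pair:
    Z = f * B / d, with disparity d = u_left - u_right (pixels)."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return f_px * baseline_m / d

# Illustrative numbers: 1000 px focal length, 10 cm baseline, 25 px disparity.
z = depth_from_disparity(640.0, 615.0, f_px=1000.0, baseline_m=0.10)  # 4.0 m
```

Because depth is inversely proportional to disparity, range resolution degrades quadratically with distance, which is why the sparse model is refined by multi-view stereo afterwards.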
Accuracy of 3D Reconstruction in an Illumination Dome
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay; Toschi, Isabella; Nocerino, Erica; Hess, Mona; Remondino, Fabio; Robson, Stuart
2016-06-01
The accuracy of 3D surface reconstruction was compared from image sets of a Metric Test Object taken in an illumination dome by two methods: photometric stereo and improved structure-from-motion (SfM), using point cloud data from a 3D colour laser scanner as the reference. Metrics included pointwise height differences over the digital elevation model (DEM), and 3D Euclidean differences between corresponding points. The enhancement of spatial detail was investigated by blending high frequency detail from photometric normals, after a Poisson surface reconstruction, with low frequency detail from a DEM derived from SfM.
Improving 3D Genome Reconstructions Using Orthologous and Functional Constraints
Diament, Alon; Tuller, Tamir
2015-01-01
The study of the 3D architecture of chromosomes has been advancing rapidly in recent years. While a number of methods for 3D reconstruction of genomic models based on Hi-C data were proposed, most of the analyses in the field have been performed on different 3D representation forms (such as graphs). Here, we reproduce most of the previous results on the 3D genomic organization of the eukaryote Saccharomyces cerevisiae using analysis of 3D reconstructions. We show that many of these results can be reproduced in sparse reconstructions, generated from a small fraction of the experimental data (5% of the data), and study the properties of such models. Finally, we propose for the first time a novel approach for improving the accuracy of 3D reconstructions by introducing additional predicted physical interactions to the model, based on orthologous interactions in an evolutionary-related organism and based on predicted functional interactions between genes. We demonstrate that this approach indeed leads to the reconstruction of improved models. PMID:26000633
Tomographic compressive holographic reconstruction of 3D objects
NASA Astrophysics Data System (ADS)
Nehmetallah, G.; Williams, L.; Banerjee, P. P.
2012-10-01
Compressive holography with multiple-projection tomography is applied to solve the ill-posed inverse problem of reconstructing 3D objects with high axial accuracy. To visualize the 3D shape, we propose Digital Tomographic Compressive Holography (DiTCH), where projections from more than one direction, as in tomographic imaging systems, can be employed, so that a 3D shape with better axial resolution can be reconstructed. We compare DiTCH with single-beam holographic tomography (SHOT), which is based on Fresnel back-propagation. A brief theory of DiTCH is presented, and experimental results of 3D shape reconstruction of objects using DiTCH and SHOT are compared.
3-D flame temperature field reconstruction with multiobjective neural network
NASA Astrophysics Data System (ADS)
Wan, Xiong; Gao, Yiqing; Wang, Yuanmei
2003-02-01
A novel 3-D temperature field reconstruction method is proposed in this paper, based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multiwavelength thermometry is established, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection (FBP) algorithm, the reconstruction results of the new method are discussed in detail. The study shows that the new method always gives the best reconstruction results. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with this novel method.
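For reference, the ART baseline the authors compare against is the classical Kaczmarz iteration: cyclically project the current estimate onto each ray-sum equation. A toy 2x2 "image" probed by row, column and diagonal sums (the geometry is made up for illustration):

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz): cyclically project
    the estimate onto each ray-sum hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for a, bi, nrm in zip(A, b, row_norms):
            x = x + relax * (bi - a @ x) / nrm * a
    return x

# 2x2 image [[x0, x1], [x2, x3]] probed by row, column and diagonal sums.
A = np.array([
    [1, 1, 0, 0],  # top row
    [0, 0, 1, 1],  # bottom row
    [1, 0, 1, 0],  # left column
    [0, 1, 0, 1],  # right column
    [1, 0, 0, 1],  # main diagonal
    [0, 1, 1, 0],  # anti-diagonal
], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = art(A, A @ x_true)
```

For a consistent system the iterates converge to the exact solution; the relaxation factor trades convergence speed against noise sensitivity in real projection data.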
Light field display and 3D image reconstruction
NASA Astrophysics Data System (ADS)
Iwane, Toru
2016-06-01
Light field optics and its applications have become popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive the 3D coordinates of a few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
3D Reconstruction For The Detection Of Cranial Anomalies
NASA Astrophysics Data System (ADS)
Kettner, B.; Shalev, S.; Lavelle, C.
1986-01-01
There is a growing interest in the use of three-dimensional (3D) cranial reconstruction from CT scans for surgical planning. A low-cost imaging system has been developed, which provides pseudo-3D images which may be manipulated to reveal the craniofacial skeleton as a whole or any particular component region. The contrast between congenital (hydrocephalic), normocephalic and acquired (carcinoma of the maxillary sinus) anomalous cranial forms demonstrates the potential of this system.
Bound constrained bundle adjustment for reliable 3D reconstruction.
Gong, Yuanzheng; Meng, De; Seibel, Eric J
2015-04-20
Bundle adjustment (BA) is a common estimation algorithm that is widely used in machine vision as the last step in a feature-based three-dimensional (3D) reconstruction pipeline. BA is essentially a non-convex nonlinear least-squares problem that simultaneously solves for the 3D coordinates of all the feature points describing the scene geometry, as well as the parameters of the camera. Conventional BA takes a parameter either as a fixed value or as an unconstrained variable, depending on whether the parameter is known. In cases where the known parameters are inaccurate but constrained to a range, conventional BA produces an incorrect 3D reconstruction if these parameters are used as fixed values. Alternatively, these inaccurate parameters can be treated as unknown variables, but this fails to exploit the knowledge of the constraints, and the resulting reconstruction can be erroneous because the BA optimization halts at a dramatically incorrect local minimum due to its non-convexity. In many practical 3D reconstruction applications, unknown variables with range constraints are usually available, such as a measurement with a range of uncertainty or a bounded estimate. To better utilize these previously known, constrained, but inaccurate parameters, a bound constrained bundle adjustment (BCBA) algorithm is proposed, developed and tested in this study. A scanning fiber endoscope (the camera) is used to capture a sequence of images above a surgery phantom (the object) of known geometry. 3D virtual models are reconstructed from these images and compared with the ground truth. The experimental results demonstrate that BCBA achieves a more reliable, rapid, and accurate 3D reconstruction than conventional bundle adjustment. PMID:25969115
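The idea of bounding inaccurate-but-constrained parameters can be illustrated with an off-the-shelf bounded solver. The toy problem below (a hypothetical 1D camera with focal length f projecting points at depths z_i) is our own illustration, not the authors' BCBA implementation:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy analogue of bound-constrained BA: jointly refine a camera parameter
# (focal length f) and point depths z_i from 1D projections x_i = f / z_i.
# f is known only to lie in [1.0, 1.5], so it is bounded rather than fixed.
f_true = 1.2
z_true = np.array([2.0, 3.0, 5.0])
obs = f_true / z_true

def residuals(p):
    f, z = p[0], p[1:]
    return f / z - obs

p0 = np.concatenate([[1.1], 4.0 * np.ones(3)])   # rough initial guess
lb = np.concatenate([[1.0], 0.1 * np.ones(3)])   # f and z_i lower bounds
ub = np.concatenate([[1.5], 10.0 * np.ones(3)])  # f and z_i upper bounds
sol = least_squares(residuals, p0, bounds=(lb, ub))
```

Note that this toy model has a global scale ambiguity (scaling f and all z_i together leaves the projections unchanged), so the solver finds a zero-residual solution inside the box rather than unique true parameters; real bundle adjustment has an analogous gauge freedom.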
3D scanning modeling method application in ancient city reconstruction
NASA Astrophysics Data System (ADS)
Ren, Pu; Zhou, Mingquan; Du, Guoguang; Shui, Wuyang; Zhou, Pengbo
2015-07-01
With the development of optical engineering technology, the precision of 3D scanning equipment has become higher, and its role in 3D modeling is increasingly distinctive. This paper proposes a 3D scanning modeling method that has been successfully applied in Chinese ancient city reconstruction. On the one hand, for existing architecture, an improved algorithm based on multiple scans is adopted. First, two pieces of scanning data are coarsely rigid-registered using spherical displacers and a vertex clustering method. Second, a global weighted ICP (iterative closest point) method is used to achieve fine rigid registration. On the other hand, for buildings that have already disappeared, an exemplar-driven algorithm for rapid modeling is proposed. Based on 3D scanning technology and historical data, a system approach is proposed for 3D modeling and virtual display of the ancient city.
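The fine rigid-registration step rests on repeatedly solving a weighted least-squares alignment between matched points. A minimal sketch of that inner step (a weighted Kabsch/Procrustes solve, with hypothetical inputs, omitting the correspondence search and the global weighting scheme) is:

```python
import numpy as np

def weighted_rigid_align(P, Q, w):
    """One weighted Procrustes step (the core of a weighted ICP iteration):
    find R, t minimizing sum_i w_i * ||R @ P_i + t - Q_i||^2."""
    w = w / w.sum()
    mu_p = w @ P
    mu_q = w @ Q
    X, Y = P - mu_p, Q - mu_q
    H = (X * w[:, None]).T @ Y          # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation (det = +1)
    t = mu_q - R @ mu_p
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Q = P @ R_true.T + t_true
R, t = weighted_rigid_align(P, Q, np.ones(len(P)))
```

An ICP loop alternates this solve with nearest-neighbor correspondence updates until the alignment stops improving.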
New Reconstruction Accuracy Metric for 3D PIV
NASA Astrophysics Data System (ADS)
Bajpayee, Abhishek; Techet, Alexandra
2015-11-01
Reconstruction for 3D PIV typically relies on recombining images captured from different viewpoints via multiple cameras/apertures. Ideally, the quality of reconstruction dictates the accuracy of the derived velocity field. A reconstruction quality parameter Q is commonly used as a measure of the accuracy of reconstruction algorithms. By definition, a high Q value requires the intensity peak levels and shapes in the reconstructed and reference volumes to match. We show that accurate velocity fields rely only on the peak locations in the volumes, not on intensity peak levels and shapes. In synthetic aperture (SA) PIV reconstructions, the intensity peak shapes vary with the number of cameras, and the peak heights vary with spatial/temporal particle intensity variation. This lowers Q but not the accuracy of the derived velocity field. We introduce a new velocity vector correlation factor Qv as a metric to assess the accuracy of 3D PIV techniques, which provides a better indication of algorithm accuracy. For SAPIV, the number of cameras required for a high Qv is lower than that for a high Q. We discuss Qv in the context of 3D PIV and also present a preliminary comparison of the performance of TomoPIV and SAPIV based on Qv.
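The abstract does not give a formula for Qv; a natural reading (and an assumption on our part) is a normalized correlation between the reconstructed and reference velocity fields, which by construction is insensitive to the intensity peak shapes that depress the volume-based Q:

```python
import numpy as np

def Qv(v_rec, v_ref):
    """Hypothetical velocity correlation factor: normalized inner product of
    two vector fields. Returns 1 for proportional fields, 0 for
    uncorrelated ones (a guess at the paper's metric, not its definition)."""
    num = float(np.sum(v_rec * v_ref))
    den = float(np.sqrt(np.sum(v_rec ** 2) * np.sum(v_ref ** 2)))
    return num / den

# Toy 3-component velocity field on a 4x4x4 grid, plus a noisy copy.
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 4, 4, 3))
v_noisy = v + 0.1 * rng.normal(size=v.shape)
```

Under this reading, a reconstruction with correct peak locations but distorted peak shapes would still score Qv near 1 while scoring poorly on Q.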
3D Reconstruction of Human Motion from Monocular Image Sequences.
Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo
2016-08-01
This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method relies on base poses trained a priori. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement. PMID:27093439
Iterative Reconstruction of Volumetric Particle Distribution for 3D Velocimetry
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard; Neal, Douglas
2011-11-01
A number of different volumetric flow measurement techniques exist for following the motion of illuminated particles. For experiments with lower seeding densities, 3D-PTV uses recorded images from typically 3-4 cameras and tracks the individual particles in space and time. For flows with a higher seeding density, tomographic PIV uses a tomographic reconstruction algorithm (e.g. MART) to reconstruct voxel intensities of the recorded volume, followed by cross-correlation of subvolumes to provide instantaneous 3D vector fields on a regular grid. A new hybrid algorithm is presented which iteratively reconstructs the 3D particle distribution directly, using particles with certain imaging properties instead of voxels as basis functions. It is shown with synthetic data that this method is capable of reconstructing densely seeded flows up to 0.05 particles per pixel (ppp) with the same or higher accuracy than 3D-PTV and tomographic PIV. Finally, the new method is validated using experimental data on a turbulent jet.
3D medical volume reconstruction using web services.
Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter
2008-04-01
We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that 3D medical volume reconstruction requires significant computer resources and human expertise in both the medical and computer science areas. The web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time the UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by the medical collaborators in a new window. In this paper, we present the 3D volume reconstruction problem requirements and the architecture of the prototype system developed at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscopy. PMID:18336808
3D video sequence reconstruction algorithms implemented on a DSP
NASA Astrophysics Data System (ADS)
Ponomaryov, V. I.; Ramos-Diaz, E.
2011-03-01
A novel approach for 3D image and video reconstruction is proposed and implemented. It is based on wavelet atomic functions (WAF), which have demonstrated better approximation properties in different processing problems in comparison with classical wavelets. Disparity maps using WAF are formed and then employed to present 3D visualizations using color anaglyphs. Additionally, compression via the Pth law is performed to improve the disparity map quality. Other approaches, such as optical flow and a stereo matching algorithm, are also implemented for comparison. Numerous simulation results justify the efficiency of the novel framework. The implementation of the proposed algorithm on the Texas Instruments DSP TMS320DM642 demonstrates that real-time processing is possible during 3D reconstruction of images and video sequences.
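Composing a color anaglyph from a left/right view pair is the simple final step of such a pipeline. A minimal red-cyan version (independent of the WAF disparity machinery) is:

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Take the red channel from the left view and the green/blue channels
    from the right view, so red/cyan glasses route each view to the
    intended eye."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Tiny synthetic views: a red-ish left image and a blue-ish right image.
left = np.zeros((2, 2, 3)); left[..., 0] = 0.8
right = np.zeros((2, 2, 3)); right[..., 2] = 0.6
ana = red_cyan_anaglyph(left, right)
```

In a full system the right view would be synthesized by warping the left view with the estimated disparity map before composing the channels.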
Incremental volume reconstruction and rendering for 3-D ultrasound imaging
NASA Astrophysics Data System (ADS)
Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry
1992-09-01
In this paper, we present approaches toward interactive visualization of real-time input, applied to 3-D visualization of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction.
Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie
2015-03-01
Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low in each frame, emitters are located to nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three-dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate emitters at high density causes poor temporal resolution of the localization-based superresolution technique and significantly limits its application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three-dimensional space. Our platform combines a multi-focus system with astigmatic optics and an ℓ1-homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphics processing unit (GPU), which speeds up processing 10 times compared with a central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image from 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
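The weighted-centroid refinement can be illustrated generically: given recovery weights on a set of candidate grid positions, the sub-grid emitter estimate is their intensity-weighted mean. This is a sketch of the idea with hypothetical inputs, not the authors' GPU code:

```python
import numpy as np

def weighted_centroid(coords, weights):
    """Intensity-weighted centroid of candidate positions: a sub-grid
    estimate of an emitter location from sparse-recovery weights."""
    w = np.asarray(weights, dtype=float)
    c = np.asarray(coords, dtype=float)
    return (c * w[:, None]).sum(axis=0) / w.sum()

# Four candidates on a unit grid around a true emitter at (2.25, 3.0, 1.5).
coords = np.array([[2.0, 3.0, 1.0],
                   [2.0, 3.0, 2.0],
                   [3.0, 3.0, 1.0],
                   [3.0, 3.0, 2.0]])
weights = np.array([0.375, 0.375, 0.125, 0.125])
pos = weighted_centroid(coords, weights)
```

In the full pipeline this step follows the debiasing of the compressed-sensing solution, turning a cluster of discrete grid weights into one continuous 3D position per emitter.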
Reconstruction and 3D visualisation based on objective real 3D based documentation.
Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A
2012-09-01
Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and the examiner's interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427
3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance
Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro
2014-09-15
Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
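The core of such a GMM-based point set registration is the soft-correspondence (E-step) computation, in which each model point takes probabilistic responsibility for each scene point. A minimal sketch, without the orientation and bifurcation-weighting extensions, is:

```python
import numpy as np

def soft_correspondences(X, Y, sigma):
    """Responsibility of each model point y_j for each scene point x_i under
    isotropic Gaussian components centered at the y_j (rows sum to 1)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)

# Two scene points, each near a different one of three model points.
X = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0]])
Y = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
R = soft_correspondences(X, Y, sigma=0.5)
```

A registration loop alternates this step with a weighted rigid-transform update (M-step) until the alignment converges; the soft weights are what make the approach robust to outliers and missing points.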
On detailed 3D reconstruction of large indoor environments
NASA Astrophysics Data System (ADS)
Bondarev, Egor
2015-03-01
In this paper we present techniques for highly detailed 3D reconstruction of extra-large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm³ on large 100,000 m³ models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra-large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for reduction of a point cloud size by 80-95%. In addition, we introduce a method for online rendering of extra-large point clouds, enabling real-time visualization of huge cloud spaces in conventional web browsers.
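The planar-based decimation idea can be sketched on a single dominant plane: fit the plane, thin the points that lie on it, and keep all off-plane detail. This toy single-plane version (the paper's method would segment many planes) is:

```python
import numpy as np

def plane_decimate(points, keep_every=10, tol=0.2):
    """Toy planar decimation: fit one dominant plane by PCA, subsample the
    points lying on it, and keep all off-plane detail points."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]                        # normal = direction of least variance
    d = np.abs((points - c) @ n)      # point-to-plane distances
    on = np.flatnonzero(d <= tol)
    off = np.flatnonzero(d > tol)
    keep = np.sort(np.concatenate([on[::keep_every], off]))
    return points[keep]

# 100 points on a floor plane plus 3 off-plane "detail" points.
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
floor = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(100)])
detail = np.array([[0.0, 0.0, 2.0], [5.0, 5.0, 2.0], [9.0, 9.0, 2.0]])
cloud = np.vstack([floor, detail])
thinned = plane_decimate(cloud)
```

Here 100 coplanar points are reduced to 10 while the 3 detail points survive, i.e. roughly the 80-95% reduction regime the paper reports for planar-dominated indoor scenes.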
A new algorithm for 3D reconstruction from support functions.
Gardner, Richard J; Kiderlen, Markus
2009-03-01
We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab, and it works for both 2D and 3D reconstructions (in fact, in principle, in any dimension). Reconstructions may be obtained without any pre- or post-processing steps and with no restriction on the sets of measurement directions except their number, a limitation dictated only by computing time. An algorithm due to Prince and Willsky was implemented earlier for 2D reconstructions, and we compare the performance of their algorithm and ours. Our algorithm, however, is the first that works for 3D reconstructions with the freedom stated above. Moreover, under mild conditions, theory guarantees that outputs of the new algorithm will converge to the input shape as the number of measurements increases. In addition, we offer a linear program version of the new algorithm that is much faster and better, or at least comparable, in performance at low levels of noise and reasonably small numbers of measurements. Another modification of the algorithm, suitable for use in a "focus of attention" scheme, is also described. PMID:19147881
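The underlying geometry can be illustrated directly: each support measurement h(u_i) defines a half-space x·u_i ≤ h_i, and the intersection of these half-spaces outer-approximates the body. The following is a didactic 2D sketch of that relationship, not the authors' least-squares algorithm:

```python
import numpy as np

def support(points, u):
    """Support function of the convex hull of a point set: h(u) = max x.u."""
    return float((points @ u).max())

def halfspace_filter(dirs, h, candidates):
    """Keep candidate points satisfying every constraint x.u_i <= h_i."""
    mask = np.all(candidates @ dirs.T <= h + 1e-12, axis=1)
    return candidates[mask]

# Noise-free support measurements of the unit square in four directions.
square = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
dirs = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
h = np.array([support(square, u) for u in dirs])
cands = np.array([[0.0, 0.0], [-1.0, -1.0], [2.0, 0.0], [1.5, 1.5]])
inside = halfspace_filter(dirs, h, cands)
```

With noisy measurements the half-spaces need not have a consistent intersection, which is exactly why the paper fits the support values by least squares (subject to convexity consistency constraints) before reconstructing the body.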
3D reconstruction methods of coronal structures by radio observations
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.
1992-11-01
The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.
Reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa
2013-08-01
Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system requires only a sequence of images taken by a camera whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction, and the procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by the camera moving freely around the object. Second, pairwise correspondences are found with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image the points of interest corresponding to those in previous images are refined or corrected, eliminating the vertical parallax between the images. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is then acquired using a non-local cost aggregation method for stereo matching, and a point cloud sequence is assembled from the scene depths using the external camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display.
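Once the cameras are calibrated, each 3D point is recovered by intersecting the viewing rays of its matched projections. A standard linear (DLT) triangulation step, shown here with hypothetical camera matrices rather than the paper's own data, is:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: solve for the homogeneous 3D point that
    reprojects to pixel x1 in camera P1 and x2 in camera P2 by taking the
    null vector of the stacked reprojection constraints."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical normalized cameras: identity and a 1-unit x-translation.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                # projection in camera 1
p2 = P2 @ np.append(X_true, 1.0)
x2 = p2[:2] / p2[2]                        # projection in camera 2
X_hat = triangulate(P1, P2, x1, x2)
```

With noisy matches the null vector is taken in the least-squares sense, which is exactly what the SVD provides.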
Scattering robust 3D reconstruction via polarized transient imaging.
Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai
2016-09-01
Reconstructing the 3D structure of scenes in a scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behavior and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve scene depth by detecting the reflection instant on the time profile of a surface point. However, in cases with a scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile deviates largely from the true value. To handle this problem, we use the different polarization behaviors of the reflection and scattering components and introduce active polarization to separate the reflection component and estimate a scattering-robust depth. Our experiments demonstrate that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944
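The separation idea can be sketched with a two-channel polarization-difference model, under the simplifying assumption that the scattered component is fully depolarized (split equally between channels) while the reflected component survives only in the parallel channel. This is an illustration of the principle, not the paper's full active-polarization scheme:

```python
import numpy as np

def separate(I_par, I_perp):
    """Polarization-difference separation under a toy model:
    I_par = reflection + scatter/2 and I_perp = scatter/2, so subtracting
    the channels isolates the (polarized) reflection component."""
    reflection = I_par - I_perp
    scatter = 2.0 * I_perp
    return reflection, scatter

# Synthetic per-pixel time-profile samples mixing the two components.
refl_true = np.array([0.7, 0.0, 0.3])
scat_true = np.array([0.2, 0.4, 0.6])
I_par = refl_true + 0.5 * scat_true
I_perp = 0.5 * scat_true
refl, scat = separate(I_par, I_perp)
```

Applied along the transient time profile of each pixel, the recovered reflection component exposes the true reflection instant, from which a scattering-robust depth can be read off.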
Optical Sensors and Methods for Underwater 3D Reconstruction
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
Structured Light-Based 3D Reconstruction System for Plants.
Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima
2015-01-01
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701
3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.
Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun
2016-08-01
Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been around for decades, determining the surface properties (e.g., composition or geometry) of specimens with increased magnification, contrast, and resolution finer than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of their three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to a remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples under investigation. In this contribution, we integrate several computational technologies, including machine learning, a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling. PMID:27200484
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping; it is especially suitable for rapid response and precise modelling in disaster emergencies.
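The image topology idea can be sketched as follows (an illustrative reconstruction of the strategy, not the authors' implementation): camera positions from the flight-control data prune the quadratic list of image pairs before feature matching.

```python
# Sketch: only images captured within a given distance of each other are
# considered for feature matching, pruning the O(n^2) pair list.
import math

def candidate_pairs(positions, max_dist):
    """positions: list of (x, y) camera locations in metres.
    Returns index pairs close enough to plausibly overlap."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) <= max_dist:
                pairs.append((i, j))
    return pairs

# Four images along a flight line 40 m apart; match only pairs within 50 m.
print(candidate_pairs([(0, 0), (40, 0), (80, 0), (120, 0)], 50))
# [(0, 1), (1, 2), (2, 3)]
```

Here the full pair list would have 6 entries; the topology map keeps only the 3 overlapping neighbours, and the saving grows quadratically with the number of images.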
New method for 3D reconstruction in digital tomosynthesis
NASA Astrophysics Data System (ADS)
Claus, Bernhard E. H.; Eberhard, Jeffrey W.
2002-05-01
Digital tomosynthesis mammography is an advanced x-ray application that can provide detailed 3D information about the imaged breast. We introduce a novel reconstruction method based on simple backprojection, which yields high-contrast reconstructions with reduced artifacts at relatively low computational complexity. The first step in the proposed reconstruction method is a simple backprojection with an order-statistics-based operator (e.g., minimum) used for combining the backprojected images into a reconstructed slice. Accordingly, a given pixel value generally does not contribute to all slices. The percentage of slices to which a given pixel value does not contribute, as well as the associated reconstructed values, are collected. Using a form of re-projection consistency constraint, one then updates the projection images and repeats the order-statistics backprojection step, now using the enhanced projection images calculated in the first step. In our digital mammography application, this new approach enhances the contrast of structures in the reconstruction and, in particular, allows recovery of the loss in signal level due to reduced tissue thickness near the skinline, while keeping artifacts to a minimum. We present results obtained with the algorithm for phantom images.
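A minimal sketch of the first step, order-statistics backprojection with the minimum operator (a toy 1-D geometry with precomputed per-projection shifts standing in for the real tomosynthesis geometry; not the authors' code):

```python
# Sketch of order-statistics backprojection: align each projection for the
# slice of interest, then combine pixel-wise with `min` instead of a mean.
import numpy as np

def min_backproject(projections, shifts):
    """projections: list of 1-D arrays; shifts: per-projection displacement
    (in pixels) for the slice height being reconstructed."""
    aligned = [np.roll(p, -s) for p, s in zip(projections, shifts)]
    return np.min(aligned, axis=0)

# A bright structure present in only one projection (i.e. out of this plane)
# is suppressed by `min`, whereas averaging would smear it into the slice.
p0 = np.array([0., 0., 5., 0., 0.])
p1 = np.array([0., 0., 5., 9., 0.])   # 9.0 appears in only one projection
slice_min = min_backproject([p0, p1], shifts=[0, 0])
print(slice_min.tolist())  # [0.0, 0.0, 5.0, 0.0, 0.0]
```

The suppression of single-projection values is exactly why the minimum operator reduces out-of-plane artifacts relative to plain averaging backprojection.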
3D Reconstruction of Coronary Artery Vascular Smooth Muscle Cells
Luo, Tong; Chen, Huan; Kassab, Ghassan S.
2016-01-01
Aims The 3D geometry of individual vascular smooth muscle cells (VSMCs), which is essential for understanding the mechanical function of blood vessels, is currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation. Methods and Results A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps and to extract the 3D geometry of VSMCs. A new edge-blocking model was introduced to recognize cell boundaries, while an edge-growing method was developed for optimal interpolation and edge verification. The proposed methods were designed around a user-selected Region of Interest (ROI) and interactive responses for a limited set of key edges. Enhanced cell-boundary features were used to construct the cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt-angle measurements, while other parameters extracted from the 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width and thickness of VSMCs were 62.9±14.9 μm, 4.6±0.6 μm and 6.2±1.8 μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction with two mean angles of -19.4±9.3° and 10.9±4.7°, while the out-of-plane angle (i.e., radial tilt angle) was found to be 8±7.6° with a median of 5.7°. Conclusions A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls from optical image stacks. The results were validated by a virtual phantom and manual measurements. The obtained 3D geometries can be utilized in mathematical models and lead to a better understanding of vascular mechanical properties and function. PMID:26882342
Dose fractionation theorem in 3-D reconstruction (tomography)
Glaeser, R.M.
1997-02-01
It is commonly assumed that the large number of projections required for single-axis tomography precludes its application to most beam-labile specimens. However, Hegerl and Hoppe have pointed out that the total dose required to achieve statistical significance for each voxel of a computed 3-D reconstruction is the same as that required to obtain a single 2-D image of that isolated voxel, at the same level of statistical significance. Thus a statistically significant 3-D image can be computed from statistically insignificant projections, as long as the total dose distributed among these projections is high enough that it would have resulted in a statistically significant projection if applied to only one image. We have tested this critical theorem by simulating the tomographic reconstruction of a realistic 3-D model created from an electron micrograph. The simulations verify the basic conclusions of the theorem under conditions of high absorption, signal-dependent noise, varying specimen contrast and missing angular range. Furthermore, the simulations demonstrate that individual projections in the series of fractionated-dose images can be aligned by cross-correlation, because they contain significant information derived from the summation of features from different depths in the structure. This latter information is generally not useful for structural interpretation prior to 3-D reconstruction, owing to the complexity of most specimens investigated by single-axis tomography. These results, in combination with dose estimates for imaging single voxels and measurements of radiation damage in the electron microscope, demonstrate that it is feasible to use single-axis tomography with soft X-ray microscopy of frozen-hydrated specimens.
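The statistical core of the dose-fractionation argument (a sum of N Poisson exposures at dose D/N carries the same counting statistics as one exposure at dose D) can be checked numerically; a sketch, with all counts and sample sizes chosen arbitrarily:

```python
# Sketch: compare the variance of a single full-dose Poisson signal with the
# variance of a sum of N fractionated-dose Poisson signals.
import numpy as np

rng = np.random.default_rng(0)
total_dose = 10000.0      # expected counts in a full-dose measurement
n_projections = 100       # the same dose split evenly among these
n_pixels = 20000

single = rng.poisson(total_dose, size=n_pixels)
fractionated = rng.poisson(total_dose / n_projections,
                           size=(n_projections, n_pixels)).sum(axis=0)

# Both signals have mean ~ total_dose and variance ~ total_dose (a sum of
# independent Poisson variables is Poisson), so the ratio should be ~1.
print(round(single.var() / fractionated.var(), 2))
```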
3-D Printed High Power Microwave Magnetrons
NASA Astrophysics Data System (ADS)
Jordan, Nicholas; Greening, Geoffrey; Exelby, Steven; Gilgenbach, Ronald; Lau, Y. Y.; Hoff, Brad
2015-11-01
The size, weight, and power requirements of HPM systems are critical constraints on their viability, and can potentially be improved through the use of additive manufacturing techniques, which are rapidly increasing in capability and affordability. Recent experiments on the UM Recirculating Planar Magnetron (RPM), have explored the use of 3-D printed components in a HPM system. The system was driven by MELBA-C, a Marx-Abramyan system which delivers a -300 kV voltage pulse for 0.3-1.0 us, with a 0.15-0.3 T axial magnetic field applied by a pair of electromagnets. Anode blocks were printed from Water Shed XC 11122 photopolymer using a stereolithography process, and prepared with either a spray-coated or electroplated finish. Both manufacturing processes were compared against baseline data for a machined aluminum anode, noting any differences in power output, oscillation frequency, and mode stability. Evolution and durability of the 3-D printed structures were noted both visually and by tracking vacuum inventories via a residual gas analyzer. Research supported by AFOSR (grant #FA9550-15-1-0097) and AFRL.
One-step reconstruction of assembled 3D holographic scenes
NASA Astrophysics Data System (ADS)
Velez Zea, Alejandro; Barrera-Ramírez, John Fredy; Torroba, Roberto
2015-12-01
We present a new experimental approach for reconstructing in one step 3D scenes otherwise not feasible in a single snapshot from a standard off-axis digital hologram architecture, due to a lack of illuminating resources or a limited setup size. Consequently, whenever a scene cannot be wholly illuminated or its size surpasses the available setup disposition, this protocol can be implemented to solve these issues. We need neither to alter the original setup at every step nor to cover the whole scene with the illuminating source, thus saving resources. With this technique we multiplex the processed holograms of actual diffuse objects composing a scene using a two-beam off-axis holographic setup in a Fresnel approach. By registering the holograms of several objects individually and applying a spatial filtering technique, the filtered Fresnel holograms can then be added to produce a compound hologram. The simultaneous reconstruction of all objects is performed in one step using the same recovering procedure employed for single holograms. Using this technique, we were able to reconstruct, for the first time to our knowledge, a scene by multiplexing off-axis holograms of the 3D objects without cross talk. This technique is important for quantitative visualization of optically packaged multiple images and is useful for a wide range of applications. We present experimental results to support the method.
Real-Time Camera Guidance for 3d Scene Reconstruction
NASA Astrophysics Data System (ADS)
Schindler, F.; Förstner, W.
2012-07-01
We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both camera trajectory and 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.
3D segmentation and reconstruction of endobronchial ultrasound
NASA Astrophysics Data System (ADS)
Zang, Xiaonan; Breslav, Mikhail; Higgins, William E.
2013-03-01
State-of-the-art practice for lung-cancer staging bronchoscopy often draws upon a combination of endobronchial ultrasound (EBUS) and multidetector computed-tomography (MDCT) imaging. While EBUS offers real-time in vivo imaging of suspicious lesions and lymph nodes, its low signal-to-noise ratio and tendency to exhibit missing region-of-interest (ROI) boundaries complicate diagnostic tasks. Furthermore, past efforts did not incorporate automated analysis of EBUS images and a subsequent fusion of the EBUS and MDCT data. To address these issues, we propose near real-time automated methods for three-dimensional (3D) EBUS segmentation and reconstruction that generate a 3D ROI model along with ROI measurements. Results derived from phantom data and lung-cancer patients show the promise of the methods. In addition, we present a preliminary image-guided intervention (IGI) system example, whereby EBUS imagery is registered to a patient's MDCT chest scan.
Clinical Experience With A Portable 3-D Reconstruction Program
NASA Astrophysics Data System (ADS)
Holshouser, Barbara A.; Christiansen, Edwin L.; Thompson, Joseph R.; Reynolds, R. Anthony; Goldwasser, Samuel M.
1988-06-01
Clinical experience with a computer program for reconstructing and visualizing three-dimensional (3-D) structures is reported. Applications to the study of soft-tissue and skeletal structures, such as the temporomandibular joint and craniofacial anatomy, using computed tomography (CT) data are described. Several features specific to the computer algorithm are demonstrated and evaluated. These include: (1) manipulation of density windows to selectively visualize bone or soft tissue structures; (2) the efficacy of gradient shading algorithms in revealing fine surface detail; and (3) the rapid generation of cut-away views revealing details of internal structures. Also demonstrated is the importance of high resolution data as input to the 3-D program. The implementation of the program (VoxelView-32) described here is on a MASSCOMP computer running UNIX. Data were collected with General Electric or Siemens CT scanners and transferred to the MASSCOMP for off-line 3-D reconstruction, via magnetic tape or Ethernet. An interactive graphics facility on the MASSCOMP allows viewing of 2-D slices, subregioning, and selection of lower and upper density thresholds for segmentation. The software then enters a pre-processing phase during which a volume representation of the segmented object (soft tissue or bone) is automatically created. This is followed by a rendering phase during which multiple views of the segmented object are automatically generated. The pre-processing phase typically takes 4 to 8 minutes (although very large datasets may require as much as 30 minutes) and the rendering phase typically takes 1 to 2 minutes for each 3-D view. Volume representation and rendering techniques are used at all stages of the processing, and gradient shading is used for enhanced surface detail.
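Gradient shading of the kind evaluated above can be sketched as follows (an illustrative reconstruction of the idea, not VoxelView-32 code): the local density gradient of the voxel volume serves as a surface normal for Lambertian shading, which is what brings out fine surface detail.

```python
# Sketch: Lambertian shading from the density gradient of a voxel volume.
import numpy as np

def gradient_shade(volume, light=(0.0, 0.0, 1.0)):
    """Return a per-voxel shading value: normalized density gradient
    dotted with the light direction, clamped to [0, 1]."""
    gz, gy, gx = np.gradient(volume.astype(float))   # per-axis gradients
    normals = np.stack([gx, gy, gz], axis=-1)
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    normals = np.divide(normals, norm,
                        out=np.zeros_like(normals), where=norm > 0)
    return np.clip(normals @ np.asarray(light), 0.0, 1.0)

# A density ramp along z has unit gradient (0, 0, 1): fully lit by a +z light.
vol = np.tile(np.arange(4.0)[:, None, None], (1, 3, 3))
print(float(gradient_shade(vol)[1, 1, 1]))  # 1.0
```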
3D-reconstruction of blood vessels by ultramicroscopy
Jährling, Nina; Becker, Klaus
2009-01-01
As recently shown, ultramicroscopy (UM) allows 3D-visualization of even large microscopic structures with µm resolution. Thus, it can be applied to anatomical studies of numerous biological and medical specimens. We reconstructed the three-dimensional architecture of tomato-lectin (Lycopersicon esculentum) stained vascular networks by UM in whole mouse organs. The topology of filigree branches of the microvasculature was visualized. Since tumors require an extensive growth of blood vessels to survive, this novel approach may open up new vistas in neurobiology and histology, particularly in cancer research. PMID:20539742
Facial-paralysis diagnostic system based on 3D reconstruction
NASA Astrophysics Data System (ADS)
Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee
2015-05-01
The diagnostic process for facial paralysis currently relies on qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera (Kinect 360) and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.
The sinogram polygonizer for reconstructing 3D shapes.
Yamanaka, Daiki; Ohtake, Yutaka; Suzuki, Hiromasa
2013-11-01
This paper proposes a novel approach, the sinogram polygonizer, for directly reconstructing 3D shapes from sinograms (i.e., the primary output from X-ray computed tomography (CT) scanners consisting of projection image sequences of an object shown from different viewing angles). To obtain a polygon mesh approximating the surface of a scanned object, a grid-based isosurface polygonizer, such as Marching Cubes, has been conventionally applied to the CT volume reconstructed from a sinogram. In contrast, the proposed method treats CT values as a continuous function and directly extracts a triangle mesh based on tetrahedral mesh deformation. This deformation involves quadratic error metric minimization and optimal Delaunay triangulation for the generation of accurate, high-quality meshes. Thanks to the analytical gradient estimation of CT values, sharp features are well approximated, even though the generated mesh is very coarse. Moreover, this approach eliminates aliasing artifacts on triangle meshes. PMID:24029910
Digital Reconstruction of 3D Polydisperse Dry Foam
NASA Astrophysics Data System (ADS)
Chieco, A.; Feitosa, K.; Roth, A. E.; Korda, P. T.; Durian, D. J.
2012-02-01
Dry foam is a disordered packing of bubbles that distort into familiar polyhedral shapes. We have implemented a method that uses optical axial tomography to reconstruct the internal structure of a dry foam in three dimensions. The technique consists of taking a series of photographs of the dry foam against a uniformly illuminated background at successive angles. By summing the projections we create images of the foam cross section. Image analysis of the cross sections allows us to locate Plateau borders and vertices. The vertices are then connected according to Plateau's rules to reconstruct the internal structure of the foam. Using this technique we are able to visualize a large number of bubbles of real 3D foams and obtain statistics of faces and edges.
Discussion of Source Reconstruction Models Using 3D MCG Data
NASA Astrophysics Data System (ADS)
Melis, Massimo De; Uchikawa, Yoshinori
In this study we performed source reconstruction of magnetocardiographic (MCG) signals generated by human cardiac activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied to the cardiac inverse problem. The data analyses were repeated using normal and vector-component MCG data. The results show that a distributed source model yields the best accuracy in the source reconstructions, and that 3D MCG data allow smaller differences between the source models to be resolved.
Computerized 3-D reconstruction of two "double teeth".
Lyroudia, K; Mikrogeorgis, G; Nikopoulos, N; Samakovitis, G; Molyvdas, I; Pitas, I
1997-10-01
"Double teeth" is a root malformation in the dentition and the purpose of this study was to reconstruct three-dimensionally the external and internal morphology of two "double teeth". The first set of "double teeth" was formed by the conjunction of a mandibular molar and a premolar, and the second by a conjunction of a maxillary molar and a supernumerary tooth. The process of 3-D reconstruction included serial cross-sectioning, photographs of the sections, digitization of the photographs, extraction of the boundaries of interest for each section, surface representation using triangulation and, finally, surface rendering using photorealistic effects. The resulting three-dimensional representations of the two teeth helped us visualize their external and internal anatomy. The results showed: a) in the first case, fusion of the radical and coronal dentin, as well as fusion of the pulp chambers; and b) in the second case, fusion only of the radical dentin and the pulp chambers. PMID:9550051
Digital 3D facial reconstruction of George Washington
NASA Astrophysics Data System (ADS)
Razdan, Anshuman; Schwartz, Jeff; Tocheri, Mathew; Hansford, Dianne
2006-02-01
PRISM is a focal point of interdisciplinary research in geometric modeling, computer graphics and visualization at Arizona State University. Many projects in the last ten years have involved laser scanning, geometric modeling and feature extraction from such data as archaeological vessels, bones, human faces, etc. This paper gives a brief overview of a recently completed project on the 3D reconstruction of George Washington (GW). The project brought together forensic anthropologists, digital artists and computer scientists in the 3D digital reconstruction of GW at ages 57, 45 and 19, including detailed heads and bodies. Although many other scanning projects, such as the Michelangelo project, have successfully captured fine details via laser scanning, our project took it a step further: predicting what the individual in the sculpture might have looked like in both later and earlier years, specifically through a process that accounts for reverse aging. Our base data were GW's face mask at the Morgan Library and Houdon's bust of GW at Mount Vernon, both made when GW was 53. Additionally, we scanned the statue at the Capitol in Richmond, VA, various dentures, and other items. Other measurements came from clothing and even portraits of GW. The digital GW models were then milled in high-density foam for a studio to complete the work. These will be unveiled at the opening of the new education center at Mount Vernon in fall 2006.
3D Reconstruction of virtual colon structures from colonoscopy images.
Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C
2014-01-01
This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230
Fast vision-based catheter 3D reconstruction.
Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D
2016-07-21
Continuum robots offer better maneuverability and inherent compliance and are well suited for surgical applications such as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape-sensing algorithm for real-time 3D reconstruction of continuum robots from the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg under added noise) of the proposed high-speed algorithms. PMID:27352011
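The two-camera reconstruction rests on standard two-view geometry; a sketch of the generic triangulation step (plain DLT for a single point, not the paper's closed-form quadratic-curve solution):

```python
# Sketch: recover a 3-D point from its projections in two arbitrarily
# positioned cameras via the direct linear transform (DLT).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coords.
    Returns the 3-D point minimising the algebraic DLT error."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                   # projection in camera 1
x2 = (X_true - [1, 0, 0])[:2] / X_true[2]     # projection in camera 2
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

Fitting a quadratic curve to many such triangulated points, or solving for the curve directly as the paper does, then yields the catheter's centerline.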
3D Reconstruction of Irregular Buildings and Buddha Statues
NASA Astrophysics Data System (ADS)
Zhang, K.; Li, M.-j.
2014-04-01
Three-dimensional laser scanning can acquire an object's surface data quickly and accurately. However, the post-processing of point clouds is not perfect and can be improved. Based on a study of 3D laser scanning technology, this paper describes solutions for modelling the irregular ancient buildings and Buddha statues in Jinshan Temple, covering data acquisition, modelling and texture mapping. In order to model irregular ancient buildings effectively, the structure of each building is extracted manually from the point cloud and the textures are mapped with the software 3ds Max. These methods combine 3D laser scanning technology with traditional modelling methods and greatly improve the efficiency and accuracy of ancient building restoration. For the statues, on the other hand, modelling is treated as object modelling in reverse engineering. The digital statue models obtained are not just vivid, but also accurate in the surveying and mapping sense. On this basis, a 3D scene of Jinshan Temple is reconstructed, which proves the validity of the solutions.
3D Surface Reconstruction and Volume Calculation of Rills
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.
2015-04-01
We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only yields a huge time advantage; it also guarantees sharp images with more than adequate overlap. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions and, finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, too much motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the calculations.
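The derivative-based sharpness selection can be sketched as follows (an illustrative metric, not necessarily the exact one used): from each interval of frames, keep the one with the highest gradient energy, since blur attenuates gradients.

```python
# Sketch: pick the sharpest frame by summed squared finite differences.
import numpy as np

def sharpness(img):
    """A simple derivative-based sharpness metric: gradient energy."""
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return (gx ** 2).sum() + (gy ** 2).sum()

def sharpest(frames):
    """Index of the frame with the highest sharpness score."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))

# A checkerboard is "sharp"; blurring it weakens (here: removes) gradients.
sharp = np.indices((8, 8)).sum(axis=0) % 2 * 255.0
blurred = (sharp + np.roll(sharp, 1, axis=1)) / 2.0   # crude horizontal blur
print(sharpest([blurred, sharp]))  # 1
```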
Fast fully 3-D image reconstruction in PET using planograms.
Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W
2004-04-01
We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
Colored 3D surface reconstruction using Kinect sensor
NASA Astrophysics Data System (ADS)
Guo, Lian-peng; Chen, Xiang-ning; Chen, Ying; Liu, Bin
2015-03-01
A colored 3D surface reconstruction method that effectively fuses the information of both depth and color images from a Microsoft Kinect is proposed and demonstrated by experiment. Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which efficiently combines the depth and color data to improve depth quality. The registered depth data are integrated into a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, an improved ray casting for rendering the fully colored surface is implemented to estimate the color texture of the reconstructed object. For depth and color images of a toy car, the improved joint-bilateral filter based on region segmentation improves the quality of the depth images by approximately 4.57 dB in peak signal-to-noise ratio (PSNR), compared with 1.16 dB for the standard joint-bilateral filter. The colored reconstruction of the toy car demonstrates the suitability and capability of the proposed method.
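The PSNR figure quoted above is the standard image-quality measure; a minimal sketch of its computation is shown below (the `peak` default assumes 8-bit images and the function name is illustrative, not the authors' code):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)               # mean squared error
    if mse == 0.0:
        return float("inf")                # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A filtered depth map would be compared against a reference depth map with `psnr(reference, filtered)`, and the gain over the raw input is the dB improvement reported in the abstract.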
3D reconstruction of rotational video microscope based on patches
NASA Astrophysics Data System (ADS)
Ma, Shijie; Qu, Yufu
2015-11-01
Because of its small field of view and shallow depth of field, a microscope can capture only 2D images of an object. To observe the three-dimensional structure of micro objects, a reconstruction algorithm for microscopy images based on an improved patch-based multi-view stereo (PMVS) algorithm is proposed. The new algorithm improves PMVS in two respects: first, it increases the number of propagation directions; second, during expansion, the expansion radius and the number of expansion iterations are set according to the angle between the normal vector of a seed patch and the direction vector of the line through the seed patch center and the camera center. Compared with PMVS, the new algorithm produces three times as many 3D points, and the holes on the vertical sides are also eliminated.
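The angle-dependent expansion schedule could be sketched as below. The angle thresholds and radius multipliers are purely illustrative assumptions, since the abstract does not state exact values:

```python
import numpy as np

def expansion_params(normal: np.ndarray, center: np.ndarray,
                     camera: np.ndarray, base_radius: float = 1.0):
    """Set expansion radius and iteration count from the angle between the
    patch normal and the ray from the patch center to the camera center.

    Hypothetical schedule: patches seen at grazing angles (e.g. vertical
    sides of the object) expand further and more often to fill holes.
    """
    ray = camera - center
    cosang = ray @ normal / (np.linalg.norm(ray) * np.linalg.norm(normal))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    if angle < 30.0:       # frontal patch: default expansion
        return base_radius, 1
    elif angle < 60.0:     # oblique patch: wider radius, one extra pass
        return 2.0 * base_radius, 2
    else:                  # grazing patch: aggressive expansion
        return 3.0 * base_radius, 3
```

A frontal patch (normal pointing at the camera) keeps the default radius, while a patch on a vertical side, nearly perpendicular to the viewing ray, is expanded three times as far.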
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.
2009-01-01
Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
The new CORIMP CME catalog & 3D reconstructions
NASA Astrophysics Data System (ADS)
Byrne, Jason; Morgan, Huw; Gallagher, Peter; Habbal, Shadia; Davies, Jackie
2015-04-01
A new coronal mass ejection catalog has been built from a unique set of coronal image processing techniques, called CORIMP, that overcomes many of the limitations of current catalogs in operation. An online database has been produced for the SOHO/LASCO data and the event detections therein, providing information on CME onset time, position angle, angular width, speed, acceleration, and mass, along with kinematic plots and observation movies. The high fidelity and robustness of these methods and of the derived CME structure and kinematics will lead to an improved understanding of the dynamics of CMEs, and a realtime version of the algorithm has been implemented to provide CME detection alerts to the interested space weather community. Furthermore, STEREO data have provided the ability to perform 3D reconstructions of CMEs observed from multiple viewpoints. This allows a determination of the 3D kinematics and morphologies of CMEs characterised in STEREO data via the 'elliptical tie-pointing' technique. The associated observations of SOHO, SDO and PROBA2 (and the intended use of K-Cor) provide additional measurements and constraints on the CME analyses in order to improve their accuracy.
3D imaging reconstruction and impacted third molars: case reports
Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea
2012-01-01
Summary There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third molar surgery, making a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between the third molars and the mandibular canal using dental CT scans, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we conclude that 3D images are not indispensable, but they can provide very welcome assistance in the most complicated cases. PMID:23386934
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were applied to an approximate surface model to generate a high quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
Gene Electrotransfer in 3D Reconstructed Human Dermal Tissue.
Madi, Moinecha; Rols, Marie-Pierre; Gibot, Laure
2016-01-01
Gene electrotransfer into the skin is of particular interest for the development of medical applications including DNA vaccination, cancer treatment, wound healing or treatment of local skin disorders. However, such clinical applications are currently limited due to poor understanding of the mechanisms governing DNA electrotransfer within human tissue. Nowadays, most studies are carried out in rodent models, but rodent skin differs from human skin in terms of cell composition and architecture. We used a tissue-engineering approach to study gene electrotransfer mechanisms in a human tissue context. Primary human dermal fibroblasts were cultured according to the self-assembly method to produce 3D reconstructed human dermal tissue. In this study, we showed that cells of the reconstructed cutaneous tissue were efficiently electropermeabilized by applying millisecond electric pulses, without affecting their viability. A reporter gene was successfully electrotransferred into this human tissue and gene expression was detected for up to 48 h. Interestingly, the transfected cells were solely located on the upper surface of the tissue, where they were in close contact with the plasmid DNA solution. Furthermore, we report evidence that electrotransfection success depends on plasmid mobility within the collagen-rich tissue, but not on cell proliferation status. In conclusion, in addition to proposing a reliable alternative to animal experiments, tissue engineering produces a valid biological tool for the in vitro study of gene electrotransfer mechanisms in human tissue. PMID:27029947
Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces
NASA Astrophysics Data System (ADS)
Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf
2016-06-01
The reconstruction of the 3D geometry of a scene based on image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces using standard cameras and a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in completeness of the 3D reconstruction is achieved.
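The core premise, that averaging N shots per viewpoint suppresses statistically uncorrelated noise by roughly a factor of sqrt(N), can be verified with a small synthetic NumPy experiment (illustrative data, not the authors' pipeline):

```python
import numpy as np

# A homogeneous (textureless) patch of constant intensity.
rng = np.random.default_rng(1)
scene = np.full((128, 128), 100.0)

# Sixteen shots of the same viewpoint, each with independent sensor noise.
shots = [scene + rng.normal(0.0, 8.0, scene.shape) for _ in range(16)]

single_noise = np.std(shots[0] - scene)        # ~8.0 for one shot
averaged = np.mean(shots, axis=0)              # multi-shot average
averaged_noise = np.std(averaged - scene)      # ~8/sqrt(16) = ~2.0

# Uncorrelated noise drops roughly as sqrt(N) when averaging N shots.
assert averaged_noise < single_noise / 3
```

With the noise floor lowered, subsequent contrast amplification can reveal weak texture that a single 8-bit exposure would bury in noise.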
NASA Astrophysics Data System (ADS)
Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella
2015-09-01
Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions at different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.
Diachronic 3d Reconstruction for Lost Cultural Heritage
NASA Astrophysics Data System (ADS)
Guidi, G.; Russo, M.
2011-09-01
Cultural Heritage artifacts are often underestimated because of their hidden presence in the landscape. The problem is particularly acute in countries like Italy, where the massive number of "famous" artifacts tends to leave other remains neglected unless they are properly exposed, or where the remains are so dramatically damaged that they offer very few interpretation clues to the visitor. In such cases a virtual presentation of the Cultural Heritage site can be of great help, especially for explaining the evolution of its status, sometimes giving sense to a few spare stones. The definition of these digital representations deals with two crucial aspects: on the one hand, the possibility of 3D surveying the relics in order to have an accurate geometrical image of the current status of the artifact; on the other hand, the presence of historical sources, both in the form of written text and images, that once properly matched with the current geometrical data may help to digitally recreate a set of 3D models visually representing the various historical phases (a diachronic model), up to the current one. The core of this article is the definition of an integrated methodology that starts from a high-resolution digital survey of the remains of an ancient building and develops a coherent virtual reconstruction from different historical sources, suggesting a scalable method suitable for re-use in generating a 4D (geometry + time) model of the artifact. This approach has been tested on the "Basilica di San Giovanni in Conca" in Milan, a very significant example for its complex historic evolution that combines evident historic values with an invisible presence inside the city.
Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures
NASA Astrophysics Data System (ADS)
Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino
2010-05-01
3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. Point correspondences are established using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On the one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
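The linear triangulation step named above can be sketched with the standard DLT (direct linear transform) formulation; this is a generic two-view sketch, not the RISA implementation:

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: corresponding 2D image points (u, v) in each view.
    Returns the 3D point minimizing the algebraic error, via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null-space vector (homogeneous point)
    return X[:3] / X[3]           # dehomogenize

# Synthetic check: two cameras related by a pure translation.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
X_est = triangulate(P1, P2, h1[:2] / h1[2], h2[:2] / h2[2])
```

Applied to every matched vessel point pair, this yields the 3D retinal tree points before the generalized-cylinder surfaces are fitted.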
3D reconstruction of hollow parts analyzing images acquired by a fiberscope
NASA Astrophysics Data System (ADS)
Icasio-Hernández, Octavio; Gonzalez-Barbosa, José-Joel; Hurtado-Ramos, Juan B.; Viliesid-Alonso, Miguel
2014-07-01
A modified fiberscope used to reconstruct difficult-to-reach inner structures is presented. By substituting the fiberscope’s original illumination system, we can project a profile-revealing light line inside the object of study. The light line is obtained using a sandwiched power light-emitting diode (LED) attached to an extension arm on the tip of the fiberscope. Profile images from the interior of the object are then captured by a camera attached to the fiberscope’s eyepiece. Using a series of those images at different positions, the system is capable of generating a 3D reconstruction of the object with submillimeter accuracy. Also proposed is the use of a combination of known filters to remove the honeycomb structures produced by the fiberscope and the use of ring gages to obtain the extrinsic parameters of the camera attached to the fiberscope and the metrological traceability of the system. Several standard ring diameter measurements were compared against their certified values to improve the accuracy of the system. To exemplify an application, a 3D reconstruction of the interior of a refrigerator duct was conducted. This reconstruction includes accuracy assessment by comparing the measurements of the system to a coordinate measuring machine. The system, as described, is capable of 3D reconstruction of the interior of objects with uniform and non-uniform profiles from 10 to 60 mm in transversal dimensions and a depth of 1000 mm if the material of the walls of the object is translucent and allows the detection of the power LED light from the exterior through the wall. If this is not possible, we propose the use of a magnetic scale which reduces the working depth to 170 mm. The assessed accuracy is around ±0.15 mm in 2D cross-section reconstructions and ±1.3 mm in 1D position using a magnetic scale, and ±0.5 mm using a CCD camera.
Height inspection of wafer bumps without explicit 3D reconstruction
NASA Astrophysics Data System (ADS)
Dong, Mei; Chung, Ronald; Zhao, Yang; Lam, Edmund Y.
2006-02-01
The shrinking dimensions of electronic devices lead to more stringent requirements on process control and quality assurance in their fabrication. For instance, direct die-to-die bonding requires placement of solder bumps not on the PCB but on the wafer itself. Such wafer solder bumps, which are much miniaturized from their counterparts on PCB, still need to have their heights meet the specification, or else the electrical connection could be compromised, the dies crushed, or even the manufacturing equipment damaged. Yet the tiny size, typically tens of microns in diameter, and the textureless and mirror nature of the bumps pose a great challenge to the 3D inspection process. This paper addresses how a large number of such wafer bumps can have their heights massively checked against the specification. We assume ball bumps in this work. We propose a novel inspection measure over the collection of bump heights that possesses these advantages: (1) it is sensitive to global and local disturbances of the bump heights, thus serving the bump height inspection purpose; (2) it is invariant to how individual bumps are locally displaced against one another on the substrate surface, thus tolerating 2D displacement error in soldering the bumps onto the wafer substrate; and (3) it is largely invariant to how the wafer itself is globally positioned relative to the imaging system, thus having tolerance to repeatability error in wafer placement. This measure makes use of the mirror nature of the bumps, which used to cause difficulty in traditional inspection methods, to capture images of two planes: one contains the bump peaks and the other corresponds to the substrate. With the homography matrices of these two planes and the fundamental matrix of the camera, we synthesize a matrix called the Biplanar Disparity Matrix. This matrix can summarize the bumps' heights in a fast and direct way without going through explicit 3D reconstruction. We also present a design of the imaging and
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, constituted of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of matching interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this kind of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
Assist feature printability prediction by 3-D resist profile reconstruction
NASA Astrophysics Data System (ADS)
Zheng, Xin; Huang, Jensheng; Chin, Fook; Kazarian, Aram; Kuo, Chun-Chieh
2012-06-01
properties may then be used to optimize the printability vs. efficacy of an SRAF either prior to or during an Optical Proximity Correction (OPC) run. The process models that are used during OPC have never been able to reliably predict which SRAFs will print. This appears to be due to the fact that OPC process models are generally created using data that does not include printed subresolution patterns. An enhancement to compact modeling capability to predict Assist Feature (AF) printability is developed and discussed. A hypsometric map representing the 3-D resist profile was built by applying a first-principles approximation to estimate the "energy loss" from the resist top to bottom. Such a 3-D resist profile is an extrapolation of a well calibrated traditional OPC model without any additional information. Assist features are detected at either the top of the resist (dark field) or the bottom of the resist (bright field); such detection can be done by simply extracting top or bottom resist models from our 3-D resist model. No assist-feature measurements are needed to build the AF model, although they can be included if desired; the calibration instead focuses on the resist response to both exposure dose and focus changes. This approach significantly increases the resist model's capability for predicting SRAF printing, and no SRAF model needs to be calibrated in addition to the OPC model. Without an increase in computation time, this compact model can draw assist feature contours with real placement and size at any vertical plane. The result is compared and validated against 3-D rigorous modeling as well as SEM images. Since this method does not change any form of compact modeling, it can be integrated into current MBAF solutions without any additional work.
NASA Astrophysics Data System (ADS)
Monserrat, Carlos; Alcaniz-Raya, Mariano L.; Juan, M. Carmen; Grau Colomer, Vincente; Albalat, Salvador E.
1997-05-01
This paper describes a new method for 3D orthodontic treatment simulation developed for an orthodontic planning system (MAGALLANES). We develop an original system for 3D capture and reconstruction of dental anatomy that avoids the use of dental casts in orthodontic treatments. Two original techniques are presented: one direct, in which data are acquired directly from the patient's mouth by means of low cost 3D digitizers, and one mixed, in which data are obtained by 3D digitizing of hydrocolloid molds. For this purpose we have designed and manufactured an optimized optical measuring system based on laser structured light. We apply these 3D dental models to simulate the 3D movement of teeth, including rotations, during orthodontic treatment. The proposed algorithms enable quantification of the effect of the orthodontic appliance on tooth movement. The developed techniques have been integrated in a system named MAGALLANES. This original system presents several tools for 3D simulation and planning of orthodontic treatments. The prototype system has been tested in several orthodontic clinics with very good results.
DIII-D Equilibrium Reconstructions with New 3D Magnetic Probes
NASA Astrophysics Data System (ADS)
Lao, Lang; Strait, E. J.; Ferraro, N. M.; Ferron, J. R.; King, J. D.; Lee, X.; Meneghini, O.; Turnbull, A. D.; Huang, Y.; Qian, J. G.; Wingen, A.
2015-11-01
DIII-D equilibrium reconstructions with the recently installed new 3D magnetic diagnostic are presented. In addition to providing information to allow more accurate 2D reconstructions, the new 3D probes also provide useful information to guide computation of 3D perturbed equilibria. A new more comprehensive magnetic compensation has been implemented. Algorithms are being developed to allow EFIT to reconstruct 3D perturbed equilibria making use of the new 3D probes and plasma responses from 3D MHD codes such as GATO and M3D-C1. To improve the computation efficiency, all inactive probes in one of the toroidal planes in EFIT have been replaced with new probes from other planes. Other 3D efforts include testing of 3D reconstructions using V3FIT and a new 3D variational moment equilibrium code VMOM3D. Other EFIT developments include a GPU EFIT version and new safety factor and MSE-LS constraints. The accuracy and limitation of the new probes for 3D reconstructions will be discussed. Supported by US DOE under DE-FC02-04ER54698 and DE-FG02-95ER54309.
Automatic Reconstruction of Spacecraft 3D Shape from Imagery
NASA Astrophysics Data System (ADS)
Poelman, C.; Radtke, R.; Voorhees, H.
We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
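The silhouette-carving stage described above can be illustrated with a minimal voxel-carving sketch: a candidate 3D point survives only if its projection falls inside the silhouette in every view. The function name and the nearest-pixel lookup are assumptions for illustration; real systems use dense voxel grids and calibrated cameras:

```python
import numpy as np

def carve(points: np.ndarray, cameras, silhouettes, img_shape):
    """Keep only candidate voxel centers consistent with every silhouette.

    points:      Nx3 candidate voxel centers.
    cameras:     list of 3x4 projection matrices, one per view.
    silhouettes: list of boolean masks (True = inside the object's outline).
    img_shape:   (rows, cols) of the silhouette images.
    """
    homog = np.c_[points, np.ones(len(points))]          # Nx4 homogeneous
    keep = np.ones(len(points), dtype=bool)
    for P, mask in zip(cameras, silhouettes):
        h = P @ homog.T                                  # 3xN projections
        u = np.round(h[0] / h[2]).astype(int)            # pixel columns
        v = np.round(h[1] / h[2]).astype(int)            # pixel rows
        inside = (u >= 0) & (u < img_shape[1]) & (v >= 0) & (v < img_shape[0])
        keep &= inside                                   # off-image: carve away
        keep[inside] &= mask[v[inside], u[inside]]       # outside silhouette: carve
    return points[keep]

# Toy check with one orthographic-style camera (u = x, v = y).
P = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 0, 1.0]])
mask = np.zeros((10, 10), dtype=bool)
mask[0, 0] = True                                        # silhouette covers one pixel
candidates = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0]])
kept = carve(candidates, [P], [mask], (10, 10))
```

Viewpoint consistency across many frames carves the bounding volume down to a solid model, which is then converted to a facet-based, texture-mapped surface as the abstract describes.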
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
A graphic user interface for efficient 3D photo-reconstruction based on free software
NASA Astrophysics Data System (ADS)
Castillo, Carlos; James, Michael; Gómez, Jose A.
2015-04-01
Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft Photoscan, VisualSFM). The workflow typically involves different stages such as image matching, sparse and dense photo-reconstruction, point cloud filtering and georeferencing. For approaches using open and free software, each of these stages usually requires a different application. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI performs as a manager of configurations and algorithms, taking advantage of the command line modes of the existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time-consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance using imagery collected in several soil erosion applications. References: Girardeau-Montaut, D. 2015. CloudCompare documentation, accessed at http://cloudcompare.org/. Wu, C. 2015. VisualSFM documentation, accessed at http://ccwu.me/vsfm/doc.html#.
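As a rough illustration of the batch-manager idea described above, the following Python sketch assembles (without executing) the command lines that chain VisualSFM's reconstruction mode and CloudCompare's headless point-cloud processing. The exact flags, output file names, and the subsampling radius are assumptions for illustration, not the GUI's actual configuration.

```python
from pathlib import Path

def build_pipeline(image_dir, out_dir):
    """Return (but do not run) the shell commands for one dataset:
    photo-reconstruction, then point-cloud subsampling."""
    out = Path(out_dir)
    return [
        # Sparse + dense reconstruction from an image folder (VisualSFM CLI).
        ["VisualSFM", "sfm+pmvs", str(image_dir), str(out / "model.nvm")],
        # Headless point-cloud filtering (CloudCompare CLI); the spatial
        # subsampling radius (0.01) is an arbitrary example value.
        ["CloudCompare", "-SILENT", "-O", str(out / "model.0.ply"),
         "-SS", "SPATIAL", "0.01"],
    ]

# Batch mode would loop over datasets, e.g.:
# for cmd in build_pipeline("dataset_01/images", "dataset_01/out"):
#     subprocess.run(cmd, check=True)
```

Keeping the command assembly separate from execution is what makes batch processing of many image datasets straightforward to manage from a GUI.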
3D reconstruction of SEM images by use of optical photogrammetry software.
Eulitz, Mona; Reiss, Gebhard
2015-08-01
Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure of the specimen. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching. PMID:26073969
APPROXIMATION OF SURFACES IN QUANTITATIVE 3-D RECONSTRUCTIONS
In serial section reconstructions, a series of planar profiles is taken representing curves on the surface of the structure to be reconstructed. For a number of quantitative serial section methods, approximation of a surface is done by the formation of tiles between points of adja...
3D reconstruction based on CT image and its application
NASA Astrophysics Data System (ADS)
Zhang, Jianxun; Zhang, Mingmin
2004-03-01
Reconstructing a 3-D model of the liver and its internal piping system, together with simulation of the liver surgical operation, can increase the accuracy and safety of liver surgery, minimize surgical wounds, shorten operation time, increase the success rate, reduce medical treatment expenses and promote patient recovery. This paper describes the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system from CT images and simulate the liver surgical operation. A direct volume rendering method establishes the 3D model of the liver. Under the OpenGL environment, a space point rendering method is adopted to display the liver's internal piping system and to simulate the surgical operation. Finally, a wavelet transform method is adopted to compress the medical image data.
Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors.
Niklas, M; Bartz, J A; Akselrod, M S; Abollahi, A; Jäkel, O; Greilich, S
2013-09-21
Fluorescent nuclear track detectors (FNTDs) based on Al2O3:C,Mg single crystals combined with confocal microscopy provide 3D information on ion tracks with a resolution limited only by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and at an angle to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo. PMID:23965401
Fringe projection profilometry for panoramic 3D reconstruction
NASA Astrophysics Data System (ADS)
Almaraz-Cabral, César-Cruz; Gonzalez-Barbosa, José-Joel; Villa, Jesús; Hurtado-Ramos, Juan-Bautista; Ornelas-Rodriguez, Francisco-Javier; Córdova-Esparza, Diana-Margarita
2016-03-01
In this paper, we introduce a panoramic profilometric system to reconstruct inner cylindrical environments. The system projects circular fringes and uses a temporal phase unwrapping technique. The recovered phase map is used to reconstruct objects placed on the inner cylindrical surface, and we derive a phase-to-depth conversion formula for this system. The use of fringe projection allows dense reconstructions. The panoramic system is composed of a digital projector, two parabolic mirrors and a CCD camera, all of which share a common axis with a reference cylinder. This paper presents results for distinct objects.
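Temporal phase unwrapping of the kind mentioned above can be sketched per pixel as follows: a coarse, unambiguous low-frequency phase selects the fringe order of the wrapped high-frequency phase. This is a generic textbook formulation, not the authors' specific formula; the function name and arguments are illustrative.

```python
import math

def unwrap_temporal(phi_high_wrapped, phi_low, ratio):
    """Resolve the 2*pi ambiguity of a wrapped high-frequency phase using a
    coarse, unambiguous low-frequency phase and the fringe-frequency ratio:
    the fringe order k is the integer that best matches ratio * phi_low."""
    k = round((ratio * phi_low - phi_high_wrapped) / (2 * math.pi))
    return phi_high_wrapped + 2 * math.pi * k
```

For example, with a frequency ratio of 10, a true phase of 25.0 rad wraps to about 6.15 rad; the low-frequency phase of 2.5 rad recovers fringe order k = 3 and hence the original value.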
Automated reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.
Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.
3-D Virtual and Physical Reconstruction of Bendego Iron
NASA Astrophysics Data System (ADS)
Belmonte, S. L. R.; Zucolotto, M. E.; Fontes, R. C.; dos Santos, J. R. L.
2012-09-01
3D laser scanning is applied to meteoritics to preserve the original shape of meteorites before cutting; the scanned data are saved in STL (stereolithography) format, which makes it possible to print three-dimensional physical models and to generate a digital replica.
The New Approach to Sport Medicine: 3-D Reconstruction
ERIC Educational Resources Information Center
Ince, Alparslan
2015-01-01
The aim of this study is to present a new approach to sport medicine. A comparative analysis of the Vertebrae Lumbales was done in a sedentary group and in Muay Thai athletes, by acquiring three-dimensional (3-D) data and models through photogrammetric methods from Multi-detector Computerized Tomography (MDCT) images of the Vertebrae…
Robust 3D reconstruction system for human jaw modeling
NASA Astrophysics Data System (ADS)
Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.
1999-03-01
This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper, an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototyping machine.
Online reconstruction of 3D magnetic particle imaging data.
Knopp, T; Hofmann, M
2016-06-01
Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s−1. However, to date image reconstruction is performed in an offline step, and thus no direct feedback is available during the experiment. For potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668
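The block-averaging idea can be illustrated with a minimal sketch (pure Python, with a made-up data layout in which each frame is a flat list of voxel samples). The adaptive choice of block size in the actual framework is not modelled here; the block size is simply a parameter trading temporal resolution against signal quality.

```python
def block_average(frames, block_size):
    """Average consecutive frames in groups of block_size (per-voxel mean).
    Larger blocks improve signal quality but coarsen temporal resolution."""
    averaged = []
    for start in range(0, len(frames) - block_size + 1, block_size):
        block = frames[start:start + block_size]
        # zip(*block) pairs up corresponding voxels across the frames.
        averaged.append([sum(v) / len(block) for v in zip(*block)])
    return averaged
```

With four 2-voxel frames and a block size of 2, the stream is reduced to two averaged frames, halving the display rate while doubling the samples per displayed volume.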
Maiti, Abhik; Chakravarty, Debashish
2016-01-01
3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate a photo-realistic 3D watertight surface of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for the generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the Ball-Pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of a 3D surface from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on the Samples per node (SN) parameter, with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the Ball-Pivoting algorithm is found to be highly dependent upon the Clustering radius and Angle threshold values. The results obtained from this study give the reader a valuable insight into the effects of different control parameters on the reconstructed surface quality. PMID:27386376
3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine
NASA Astrophysics Data System (ADS)
Hamamoto, Kazuhiko; Sato, Motoyoshi
3D imaging techniques are important and indispensable in diagnosis. In the mainstream approach, a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost and small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.
Thermal infrared exploitation for 3D face reconstruction
NASA Astrophysics Data System (ADS)
Abayowa, Bernard O.
2009-05-01
Despite the advances in face recognition research, current face recognition systems are still not accurate or robust enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and then a 3D modeler is used to estimate the geometric structure from the predicted visual imagery. This research will find its application in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alert, missing dementia patient) and manhunt scenarios.
Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.
Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed
2009-06-01
Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilizes as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm. PMID:19380272
Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction
NASA Astrophysics Data System (ADS)
Austin, Christian D.; Moses, Randolph L.
2006-05-01
This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.
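A simplified stand-in for the detection step is interferometric coherence thresholding: resolution cells whose two-channel sample coherence is high are treated as single-scatterer cells and kept, while the rest are rejected from the image. This is a generic IFSAR statistic, not the multiple-hypothesis detection statistics derived in the paper; the threshold value is invented.

```python
import math

def coherence(s1, s2):
    """Sample coherence magnitude of two complex SAR channels over a window.
    Values near 1 suggest a single dominant scatterer; low values indicate
    multiple scatterers or noise in the resolution cell."""
    num = abs(sum(a * b.conjugate() for a, b in zip(s1, s2)))
    den = math.sqrt(sum(abs(a) ** 2 for a in s1) * sum(abs(b) ** 2 for b in s2))
    return num / den

def keep_pixel(s1, s2, threshold=0.8):
    """Toy detection rule: accept the cell only if it is coherent enough."""
    return coherence(s1, s2) >= threshold
```

Identical channels give coherence exactly 1, while uncorrelated speckle drives the value toward 0, which is why thresholding this statistic separates clean single-scatterer pixels from the rest.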
Single view-based 3D face reconstruction robust to self-occlusion
NASA Astrophysics Data System (ADS)
Lee, Youn Joo; Lee, Sung Joo; Park, Kang Ryoung; Jo, Jaeik; Kim, Jaihie
2012-12-01
State-of-the-art 3D morphable model (3DMM) is used widely for 3D face reconstruction based on a single image. However, this method has a high computational cost, and hence, a simplified 3D morphable model (S3DMM) was proposed as an alternative. Unlike the original 3DMM, S3DMM uses only a sparse 3D facial shape, and therefore, it incurs a lower computational cost. However, this method is vulnerable to self-occlusion due to head rotation. Therefore, we propose a solution to the self-occlusion problem in S3DMM-based 3D face reconstruction. This research is novel compared with previous works, in the following three respects. First, self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model. Second, a 3D model fitting scheme is designed based on selected visible facial feature points, which facilitates 3D face reconstruction without any effect from self-occlusion. Third, the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3D model fitting process. The experimental results showed that the self-occlusion detection had high accuracy and our proposed method delivered a noticeable improvement in the 3D face reconstruction performance compared with previous methods.
3D reconstruction of tropospheric cirrus clouds by stereovision system
NASA Astrophysics Data System (ADS)
Nadjib Kouahla, Mohamed; Moreels, Guy; Seridi, Hamid
2016-07-01
A stereo imaging method is applied to measure the altitude of cirrus clouds and provide a 3D map of the altitude of the layer centroid. They are located in the high troposphere and sometimes in the lower stratosphere, between 6 and 10 km high. Two simultaneous images of the same scene are taken with Canon (400D) cameras at two sites 37 km apart. Each image is processed to invert the perspective effect and provide a satellite-type view of the layer. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a correlation coefficient (ZNCC, Zero-mean Normalized Cross-Correlation, or ZSSD, Zero-mean Sum of Squared Differences). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in June 2014 in France. The images were taken simultaneously at Marnay (47°17'31.5" N, 5°44'58.8" E; altitude 275 m), 25 km northwest of Besancon, and at Mont Poupet (46°58'31.5" N, 5°52'22.7" E; altitude 600 m), 43 km southwest of Besancon. 3D maps of natural cirrus clouds and of artificial clouds such as aircraft trails are retrieved. They are compared with pseudo-relief intensity maps of the same region. The mean altitude of the cirrus barycenter was located at 8.5 ± 1 km on June 11.
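The two matching scores named above have standard definitions, sketched here for flattened image patches (pure Python; patch extraction and the search over candidate match points are omitted):

```python
import math

def zncc(a, b):
    """Zero-mean Normalized Cross-Correlation of two equal-size patches
    (flattened to lists); 1.0 means perfectly correlated intensities,
    invariant to brightness offset and gain."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

def zssd(a, b):
    """Zero-mean Sum of Squared Differences; 0.0 for patches identical up
    to a brightness offset (lower is better)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum(((x - ma) - (y - mb)) ** 2 for x, y in zip(a, b))
```

Because both scores subtract the patch means, a uniform brightness difference between the two cameras does not spoil the match, which matters for low-contrast targets such as cirrus layers.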
Method for 3D fibre reconstruction on a microrobotic platform.
Hirvonen, J; Myllys, M; Kallio, P
2016-07-01
Automated handling of a natural fibrous object requires a method for acquiring the three-dimensional geometry of the object, because its dimensions cannot be known beforehand. This paper presents a method for calculating the three-dimensional reconstruction of a paper fibre on a microrobotic platform that contains two microscope cameras. The method is based on detecting curvature changes in the fibre centreline, and using them as the corresponding points between the different views of the images. We test the developed method with four fibre samples and compare the results with the references measured with an X-ray microtomography device. We rotate the samples through 16 different orientations on the platform and calculate the three-dimensional reconstruction to test the repeatability of the algorithm and its sensitivity to the orientation of the sample. We also test the noise sensitivity of the algorithm, and record the mismatch rate of the correspondences provided. We use the iterative closest point algorithm to align the measured three-dimensional reconstructions with the references. The average point-to-point distances between the reconstructed fibre centrelines and the references are 20-30 μm, and the mismatch rate is low. Given the manipulation tolerance, this shows that the method is well suited to automated fibre grasping. This has also been demonstrated with actual grasping experiments. PMID:26695385
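The alignment step mentioned above (iterative closest point) alternates nearest-neighbour pairing with a closed-form rigid fit of the paired points. The inner fit can be sketched in 2D as follows; the fibre data are 3D, so this is a simplified illustration of the principle, not the authors' implementation.

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form rotation + translation aligning paired 2D points src -> dst:
    the step that ICP repeats after re-pairing nearest points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s = [(x - csx, y - csy) for x, y in src]   # centred source points
    d = [(x - cdx, y - cdy) for x, y in dst]   # centred target points
    dot = sum(a * c + b * e for (a, b), (c, e) in zip(s, d))
    crs = sum(a * e - b * c for (a, b), (c, e) in zip(s, d))
    theta = math.atan2(crs, dot)               # optimal rotation angle
    ct, st = math.cos(theta), math.sin(theta)
    tx = cdx - (ct * csx - st * csy)           # translation from centroids
    ty = cdy - (st * csx + ct * csy)
    return theta, (tx, ty)

def transform_pts(theta, t, pts):
    ct, st = math.cos(theta), math.sin(theta)
    return [(ct * x - st * y + t[0], st * x + ct * y + t[1]) for x, y in pts]
```

Given correct pairings this fit is exact in one step; ICP's iterations exist only to improve the pairings between the measured centreline and the reference.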
3D model tools for architecture and archaeology reconstruction
NASA Astrophysics Data System (ADS)
Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice
2016-06-01
The main objective of architectural and patrimonial survey is to provide a precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological object and sites) for preservation and protection, for scientific studies and restoration purposes, for the presentation to the general public. Cultural heritage documentation includes an interdisciplinary approach having as purpose an overall understanding of the object itself and an integration of the information which characterize it. The accuracy and the precision of the model are directly influenced by the quality of the measurements realized on field and by the quality of the software. The software is in the process of continuous development, which brings many improvements. On the other side, compared to aerial photogrammetry, close range photogrammetry and particularly architectural photogrammetry is not limited to vertical photographs with special cameras. The methodology of terrestrial photogrammetry has changed significantly and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanner) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method and the 4th dimension - time. The paper proves its applicability as photogrammetric technologies are nowadays used at a large scale for obtaining the 3D model of cultural heritage objects, efficacious in their assessment and monitoring, thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method used - very important issue for both the industrial and scientific segment when facing decisions such as in which technology to invest more research and funds.
Optic flow aided navigation and 3D scene reconstruction
NASA Astrophysics Data System (ADS)
Rollason, Malcolm
2013-10-01
An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
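Why the unaided INS error grows so quickly can be seen from a toy dead-reckoning sketch: a constant accelerometer bias b, integrated twice, produces a position error of roughly 0.5·b·t². The bias value below is invented to reproduce the order of magnitude quoted above (~50 m after ~40 s); it is not taken from the paper.

```python
def drift(bias, dt, steps):
    """Dead-reckon position from a pure bias signal (no real motion):
    velocity and position errors compound at every integration step."""
    v = p = 0.0
    for _ in range(steps):
        v += bias * dt   # first integration: velocity error grows linearly
        p += v * dt      # second integration: position error grows quadratically
    return p

# 40 s at 100 Hz with an assumed 0.06 m/s^2 bias gives roughly 48 m of error,
# the same order as the ~50 m reported for the unaided INS above.
```

Optic flow aiding works because the camera measurements bound the velocity error, breaking the quadratic growth even when the scene's 3D structure is unknown.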
Quantitative Reconstructions of 3D Chemical Nanostructures in Nanowires.
Rueda-Fonseca, P; Robin, E; Bellet-Amalric, E; Lopez-Haro, M; Den Hertog, M; Genuist, Y; André, R; Artioli, A; Tatarenko, S; Ferrand, D; Cibert, J
2016-03-01
Energy dispersive X-ray spectrometry is used to extract a quantitative 3D composition profile of heterostructured nanowires. The analysis of hypermaps recorded along a limited number of projections, with a preliminary calibration of the signal associated with each element, is compared to the intensity profiles calculated for a model structure with successive shells of circular, elliptic, or faceted cross sections. This discrete tomographic technique is applied to II-VI nanowires grown by molecular beam epitaxy, incorporating ZnTe and CdTe and their alloys with Mn and Mg, with typical size down to a few nanometers and Mn or Mg content as low as 10%. PMID:26837636
3D microscopy - new powerful tools in geomaterials characterization
NASA Astrophysics Data System (ADS)
Mauko Pranjić, Alenka; Mladenovič, Ana; Turk, Janez; Šajna, Aljoša; Čretnik, Janko
2016-04-01
Microtomography (microCT) is becoming more and more widely recognized in the geological sciences as a powerful tool for the spatial characterization of rocks and other geological materials. Together with 3D image analysis and other complementary techniques, it has the characteristics of an innovative and non-destructive 3D microscopical technique. On the other hand, its main disadvantages are low availability (only a few geological laboratories are equipped with high-resolution tomographs), the relatively high price of testing connected with the use of an X-ray source, technical limitations connected to the resolution and imaging of certain materials, as well as the time-consuming and complex 3D image analysis necessary for quantification of 3D tomographic data sets. In this work, three examples are presented of optimal 3D microscopy analysis of geomaterials in construction: porosity characterization of impregnated sandstone, aerated concrete, and marble prone to bowing. The studies include microCT imaging, 3D data analysis and fitting of the data with complementary analyses, such as confocal microscopy, mercury porosimetry, gas sorption, optical/fluorescent microscopy and scanning electron microscopy. The present work was done within the framework of the national research project "3D and 4D microscopy - development of new powerful tools in geosciences" (ARRS J1-7148), funded by the Slovenian Research Agency.
3D reconstruction software comparison for short sequences
NASA Astrophysics Data System (ADS)
Strupczewski, Adam; Czupryński, Błażej
2014-11-01
Large scale multiview reconstruction is recently a very popular area of research. There are many open source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons between all the available software in terms of accuracy on small datasets that a single user can create. The typical datasets for testing of the software are archeological sites or cities, comprising thousands of images. This paper presents a comparison of currently available open source multiview reconstruction software for small datasets. It also compares the open source solutions with a simple structure from motion pipeline developed by the authors from scratch with the use of OpenCV and Eigen libraries.
Quality Analysis of 3d Surface Reconstruction Using Multi-Platform Photogrammetric Systems
NASA Astrophysics Data System (ADS)
Lari, Z.; El-Sheimy, N.
2016-06-01
In recent years, the necessity of accurate 3D surface reconstruction has become more pronounced for a wide range of mapping, modelling, and monitoring applications. The 3D data for satisfying the needs of these applications can be collected using different digital imaging systems. Among them, photogrammetric systems have recently received considerable attention due to significant improvements in digital imaging sensors, the emergence of new mapping platforms, and the development of innovative data processing techniques. To date, a variety of techniques have been proposed for 3D surface reconstruction using imagery collected by multi-platform photogrammetric systems. However, these approaches suffer from the lack of a well-established quality control procedure which evaluates the quality of reconstructed 3D surfaces independently of the utilized reconstruction technique. Hence, this paper aims to introduce a new quality assessment platform for the evaluation of 3D surface reconstruction using photogrammetric data. This quality control procedure is performed while considering the quality of the input data, the processing procedures, and photo-realistic 3D surface modelling. The feasibility of the proposed quality control procedure is finally verified by quality assessment of the 3D surface reconstruction using images from different photogrammetric systems.
3D reconstruction with two webcams and a laser line projector
NASA Astrophysics Data System (ADS)
Li, Dongdong; Hui, Bingwei; Qiu, Shaohua; Wen, Gongjian
2014-09-01
Three-dimensional (3D) reconstruction is one of the most attractive research topics in photogrammetry and computer vision, and nowadays 3D reconstruction with simple, consumer-grade equipment plays an important role. In this paper, a 3D reconstruction desktop system is built based on binocular stereo vision using a laser scanner. The hardware requirements are a simple commercial hand-held laser line projector and two common webcams for image acquisition. Generally, 3D reconstruction based on passive triangulation methods requires point correspondences among the various viewpoints, and the development of matching algorithms remains a challenging task in computer vision. In our proposal, with the help of the laser line projector, stereo correspondences are established robustly from epipolar geometry and the laser shadow on the scanned object. To establish correspondences more conveniently, epipolar rectification is employed using Bouguet's method after stereo calibration with a printed chessboard. The 3D coordinates of the observed points are computed by ray-ray triangulation, and reconstruction outliers are removed using the planarity constraint of the laser plane. Dense 3D point clouds are derived from multiple scans under different orientations; each point cloud is obtained by sweeping the laser plane across the object to be reconstructed. The Iterative Closest Point algorithm is employed to register the derived point clouds: the rigid body transformation between neighboring scans is estimated to assemble the complete 3D point cloud. Finally, polygon meshes are reconstructed from the derived point cloud, and color images are used for texture mapping to obtain a lifelike 3D model. Experiments show that our reconstruction method is simple and efficient.
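The ray-ray triangulation step can be sketched as finding the midpoint of the shortest segment joining the two viewing rays of a matched point. A minimal sketch (function and variable names are ours, not the paper's implementation):

```python
# Midpoint ray-ray triangulation: given two camera centers c1, c2 and
# viewing directions d1, d2 for a matched point, recover the 3D point
# as the midpoint of the common perpendicular between the two rays.
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between rays c1+t*d1 and c2+s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    g = d1 @ d2
    # Normal equations for minimizing |(c1 + t*d1) - (c2 + s*d2)|^2
    t, s = np.linalg.solve(np.array([[1.0, -g], [-g, 1.0]]),
                           np.array([b @ d1, b @ d2]))
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))
```

When the two rays intersect exactly, the midpoint coincides with the intersection; with noisy correspondences it gives the symmetric least-squares compromise.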
Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration
NASA Astrophysics Data System (ADS)
Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.
2012-02-01
The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results have been reported using postoperative CT; however, its extensive usage in clinical routine is hampered by the requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. The alignment is achieved by a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43° ± 1.19°, 0.45° ± 2.17°, 0.23° ± 1.05°) and (0.03 ± 0.55, -0.03 ± 0.54, -2.73 ± 1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53 ± 0.30 mm distance error.
3D digital breast tomosynthesis image reconstruction using anisotropic total variation minimization.
Seyyedi, Saeed; Yildirim, Isa
2014-01-01
This paper presents a compressed-sensing-based reconstruction method for 3D digital breast tomosynthesis (DBT) imaging. The algebraic reconstruction technique (ART) has been used in DBT imaging by minimizing the isotropic total variation (TV) of the reconstructed image. However, the resolution in DBT differs in the sagittal and axial directions, which should be accounted for during TV minimization. In this study we develop a 3D anisotropic TV (ATV) minimization that considers the different resolutions in different directions. A customized 3D Shepp-Logan phantom was generated to mimic a real DBT image, taking into account overlapping tissue and the directional resolution issue. Results of ART, ART+3D TV and ART+3D ATV are compared using structural similarity (SSIM) diagrams. PMID:25571377
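The anisotropic TV regularizer can be sketched as a sum of axis-weighted absolute finite differences, so the lower axial (slice) resolution is penalized with a different weight than the in-plane directions. The weights below are illustrative assumptions, not values from the paper:

```python
# Anisotropic 3D total variation: forward differences along each axis
# are weighted separately to reflect direction-dependent resolution.
import numpy as np

def anisotropic_tv_3d(vol, w=(1.0, 1.0, 0.2)):
    """Sum of axis-weighted absolute forward differences of a 3D volume."""
    dz = np.abs(np.diff(vol, axis=0))   # axial (slice) direction
    dy = np.abs(np.diff(vol, axis=1))   # in-plane
    dx = np.abs(np.diff(vol, axis=2))   # in-plane
    return w[0] * dz.sum() + w[1] * dy.sum() + w[2] * dx.sum()
```

Setting all weights equal recovers the isotropic (L1) TV that the baseline ART+TV scheme minimizes.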
3D parameter reconstruction in hyperspectral diffuse optical tomography
NASA Astrophysics Data System (ADS)
Saibaba, Arvind K.; Krishnamurthy, Nishanth; Anderson, Pamela G.; Kainerstorfer, Jana M.; Sassaroli, Angelo; Miller, Eric L.; Fantini, Sergio; Kilmer, Misha E.
2015-03-01
The imaging of shape perturbation and chromophore concentration using Diffuse Optical Tomography (DOT) data can be mathematically described as an ill-posed, non-linear inverse problem. The reconstruction algorithm for hyperspectral data using a linearized Born model is prohibitively expensive, both in terms of computation and memory. We model the shape of the perturbation using a parametric level-set (PaLS) approach. We discuss novel computational strategies for reducing the computational cost, based on a Krylov subspace approach for parametric linear systems and a compression strategy for the parameter-to-observation map. We demonstrate the validity of our approach by comparison with experiments.
Robust registration for removing vibrations in 3D reconstruction of web material
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Garcia, Daniel F.
2015-05-01
Vibrations are a major challenge in laser-based 3D reconstruction of web material. In uncontrolled environments, the forward movement of web material along a track is inevitably affected by vibrations. These oscillations significantly degrade the performance of the 3D reconstruction system, as they are incorrectly interpreted as irregularities on the surface of the material, leading to an erroneous reconstruction of the 3D surface. This work proposes a method to estimate and remove these vibrations based on a robust registration procedure: registration is used to estimate the vibrations, and a rigid transformation is applied to compensate for the movement, removing the effects of vibrations from the 3D reconstruction. The proposed method is applied to an extensive dataset, both synthetic and real, with very good results.
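The estimate-then-compensate idea can be illustrated with a toy sketch (not the paper's actual robust registration): each laser profile is registered to a reference profile by a robust vertical offset estimate, here the median of per-point differences, and that offset, attributed to vibration, is subtracted before the surface is assembled.

```python
# Toy vibration removal: per-profile vertical offsets are estimated by
# a robust (median-based) registration against a reference profile and
# subtracted, so oscillations are not mistaken for surface defects.
import numpy as np

def remove_vibration(profiles, reference):
    """Subtract per-profile vertical offsets estimated robustly."""
    corrected = []
    for p in profiles:
        offset = np.median(p - reference)   # median resists real surface defects
        corrected.append(p - offset)
    return np.array(corrected)
```

The median makes the offset estimate insensitive to the small fraction of points that belong to genuine surface irregularities.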
NASA Astrophysics Data System (ADS)
Vallet, B.; Soheilian, B.; Brédif, M.
2014-08-01
The 3D reconstruction of similar 3D objects detected in 2D images faces a major issue when it comes to grouping the 2D detections into clusters used to reconstruct the individual 3D objects: simple clustering heuristics fail as soon as similar objects are close to one another. This paper formulates a framework that uses the geometric quality of the reconstruction as a hint for proper clustering. We present a methodology to solve the resulting combinatorial optimization problem, with some simplifications and approximations that make it tractable. The proposed method is applied to the reconstruction of 3D traffic signs from their 2D detections to demonstrate its capacity to solve ambiguities.
Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; White, Stuart C.
1992-05-01
This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study demonstrating the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT scans to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it remains the predominant method for producing 3-D pictures for clinical use. This paper is intended to provide the physician with a clear demonstration of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla in which 3-D reconstructions made with different bone thresholds (windows) are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded-surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computed lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for 3-D reconstruction, as well as cautionary language that should accompany 3-D images.
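The "maximum theoretical fidelity" rule described above simply places the threshold halfway between the mean intensities of cortical bone and soft tissue; a one-line sketch (the intensity values in the test are illustrative, not from the paper):

```python
# Midpoint threshold rule for bone/soft-tissue segmentation: take the
# intensity halfway between the two class means.
def midpoint_threshold(mean_bone, mean_soft_tissue):
    """Threshold at the midpoint of the two mean intensities."""
    return 0.5 * (mean_bone + mean_soft_tissue)
```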
Bayesian 3D velocity field reconstruction with VIRBIUS
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem
2016-03-01
I describe a new Bayesian algorithm to infer the full three-dimensional velocity field from observed distances and spectroscopic galaxy catalogues. In addition to the velocity field itself, the algorithm reconstructs true distances, some cosmological parameters and specific non-linearities in the velocity field. The algorithm takes care of selection effects and miscalibration issues, and can easily be extended to handle direct fitting of, e.g., the inverse Tully-Fisher relation. I first describe the algorithm in detail alongside its performance. The algorithm is implemented in the VIRBIUS (VelocIty Reconstruction using Bayesian Inference Software) software package. I then test it on different mock distance catalogues with varying complexity of observational issues. The model proved to give robust measurements of velocities for mock catalogues of 3000 galaxies, and I expect the core of the algorithm to scale to tens of thousands of galaxies. It holds the promise of a better handle on future large and deep distance surveys, for which individual distance errors would otherwise impede velocity field inference.
Reconstructing 3-D Ship Motion for Synthetic Aperture Sonar Processing
NASA Astrophysics Data System (ADS)
Thomsen, D. R.; Chadwell, C. D.; Sandwell, D.
2004-12-01
We are investigating the feasibility of coherent ping-to-ping processing of multibeam sonar data for high-resolution mapping and change detection in the deep ocean. Theoretical calculations suggest that standard multibeam resolution can be improved from 100 m to ~10 m through coherent summation of pings, similar to synthetic aperture radar image formation. A requirement for coherent summation of pings is to correct the phase of the return echoes to an accuracy of ~3 cm at a sampling rate of ~10 Hz. In September 2003, we conducted a seagoing experiment aboard R/V Revelle to test these ideas. Three geodetic-quality GPS receivers were deployed to recover 3-D ship motion to an accuracy of ±3 cm at a 1 Hz sampling rate [Chadwell and Bock, GRL, 2001]. Additionally, inertial navigation (INS) data from fiber-optic gyroscopes and pendulum-type accelerometers were collected at a 10 Hz rate. Independent measurements of ship orientation (yaw, pitch, and roll) from the GPS and INS agree to an RMS accuracy of better than 0.1 degree. Because inertial navigation hardware is susceptible to drift, these measurements were combined with the GPS to achieve both high accuracy and a high sampling rate. To preserve the short-timescale accuracy of the INS and the long-timescale accuracy of the GPS measurements, time-filtered differences between the GPS and INS were subtracted from the INS integrated linear velocities. An optimal filter length of 25 s was chosen to force the RMS difference between the GPS and the integrated INS to be on the order of the accuracy of the GPS measurements. This analysis provides an upper bound on 3-D ship motion accuracy. Additionally, errors in attitude can propagate into the projected motions of individual hydrophones; with lever arms on the order of 5 m, these errors will likely be ~1 mm. Based on these analyses, we expect to achieve the 3-cm accuracy requirement. Using full-resolution hydrophone data collected by a SIMRAD EM/120 echo sounder
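The GPS/INS blending described above is a complementary filter: the slowly varying (drift) part of the INS-minus-GPS difference is extracted with a low-pass filter and subtracted from the INS track, keeping INS short-term detail and GPS long-term accuracy. A simplified sketch using a moving-average low-pass (the filter form and edge handling are our assumptions, not the authors' exact processing):

```python
# Complementary GPS/INS blending: low-pass the INS-GPS difference to
# estimate INS drift, then subtract it from the INS track.
import numpy as np

def blend_gps_ins(ins, gps, window=25):
    """Remove low-passed INS drift using GPS as the long-timescale reference."""
    diff = ins - gps
    kernel = np.ones(window) / window
    # 'same' keeps the series length; edge effects are ignored in this sketch
    drift = np.convolve(diff, kernel, mode="same")
    return ins - drift
```

With a 25-sample window at 1 Hz this mirrors the 25 s filter length quoted in the abstract.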
[3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].
Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu
2015-08-01
The aim of this study was to propose a three-dimensional projection onto convex sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. Firstly, we built low-resolution 3D images that have sub-pixel spatial displacements between each other and generated the reference image. Then, we mapped the low-resolution images into the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency-constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we displayed the different resolution images simultaneously. We estimated the performance of the proposed method on 5 image sets and compared it with that of 3 interpolation-based reconstruction methods. The experiments showed that the 3D POCS algorithm outperformed the 3 interpolation methods in both subjective and objective terms, and that the mixed display mode is suitable for 3D visualization of pulmonary nodules at high resolution. PMID:26710449
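The POCS idea can be illustrated with a toy 1-D sketch (not the paper's 3-D implementation): each low-resolution pixel constrains the mean of a block of high-resolution samples, and projecting onto that convex consistency set shifts every sample in the block equally so the block mean matches the observation. Iterating over all constraints drives the estimate toward consistency with the data.

```python
# Toy 1-D POCS super-resolution: enforce that each low-res observation
# equals the mean of its block of high-res samples.
import numpy as np

def pocs_superres_1d(y, factor, n_iter=50):
    """High-res signal whose block means match the low-res observations y."""
    x = np.repeat(y, factor).astype(float)        # initial guess
    for _ in range(n_iter):
        for k, yk in enumerate(y):
            block = slice(k * factor, (k + 1) * factor)
            x[block] += yk - x[block].mean()      # projection onto set k
    return x
```

In the real 3-D setting there are several shifted low-resolution volumes, so the constraint sets genuinely disagree with the initial guess and the iteration does useful work; this toy keeps only one observation per block to stay short.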
Image-Based 3d Reconstruction and Analysis for Orthodontia
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2012-08-01
Among the main tasks of orthodontia are the analysis of teeth arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of tooth parameters and design of the ideal teeth arch curve which the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are bonded to the teeth, and a wire of given shape clamped by these brackets to produce the forces necessary to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach supports accurate measurement of the tooth parameters needed for adequate planning, design of the correct teeth positions, and monitoring of the treatment process. The developed technique applies photogrammetric methods to teeth arch 3D model generation, bracket position determination and tooth shifting analysis.
Automatic Texture Reconstruction of 3d City Model from Oblique Images
NASA Astrophysics Data System (ADS)
Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang
2016-06-01
In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality, visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset; the resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
3D surface reconstruction based on image stitching from gastric endoscopic video sequence
NASA Astrophysics Data System (ADS)
Duan, Mengyao; Xu, Rong; Ohya, Jun
2013-09-01
This paper proposes a method for reconstructing the detailed 3D structure of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition the video sequence into groups, where each group consists of two successive frames (an image pair), and each pair contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed utilizing structure from motion (SfM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds, by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as seed points, and then obtain a dense point cloud from the sparse point cloud via the depth-testing method presented by Furukawa. Finally, utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
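Given matched points in the overlapping region, the rigid transformation of the stitching step can be computed in closed form with the Kabsch/Procrustes solution. A minimal sketch (a generic standard method, not the authors' SIFT-based pipeline):

```python
# Closed-form rigid alignment (Kabsch): least-squares rotation R and
# translation t mapping matched source points onto destination points.
import numpy as np

def rigid_transform(src, dst):
    """R, t minimizing ||dst - (src @ R.T + t)|| over rotations R."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

ICP-style registration repeats this solve after re-estimating correspondences at each iteration.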
Nguyen, Duc V; Vo, Quang N; Le, Lawrence H; Lou, Edmond H M
2015-02-01
Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine associated with vertebral rotation. The Cobb angle and axial vertebral rotation are important parameters for assessing the severity of scoliosis. However, vertebral rotation is seldom measured from radiographs because the measurement is time-consuming. Different techniques have been developed to extract 3D spinal information; among them, ultrasound imaging is a promising method. This pilot study reports an image processing method to reconstruct the posterior surface of vertebrae from 3D ultrasound data. Three cadaver vertebrae, a Sawbones spine phantom, and the spine of a child with AIS were used to validate the development. The in-vitro results showed that the surface of the reconstructed image was visually similar to the original objects; the dimension measurement error was <5 mm and the Pearson correlation was >0.99. The results also showed high accuracy in vertebral rotation, with errors of 0.8 ± 0.3°, 2.8 ± 0.3° and 3.6 ± 0.5° for rotation values of 0°, 15° and 30°, respectively. Meanwhile, the differences between the phantom and the image were 4° in Cobb angle and 2° in vertebral rotation at the apex. The Cobb angle measured from the in-vivo ultrasound image differed by 4° from the radiograph. PMID:25550193
Visualization of 3D elbow kinematics using reconstructed bony surfaces
NASA Astrophysics Data System (ADS)
Lalone, Emily A.; McDonald, Colin P.; Ferreira, Louis M.; Peters, Terry M.; King, Graham J. W.; Johnson, James A.
2010-02-01
An approach for direct visualization of continuous three-dimensional elbow kinematics using reconstructed surfaces has been developed. Simulation of valgus motion was achieved in five cadaveric specimens using an upper-arm simulator. Direct visualization of the motion of the ulna and humerus at the ulnohumeral joint was obtained using a contact-based registration technique. Employing fiducial markers, the rendered humerus and ulna were positioned according to the simulated motion. The specific aim of this study was to investigate the effect of radial head arthroplasty on restoring elbow joint stability after radial head excision. The position of the ulna and humerus was visualized for the intact elbow and following radial head excision and replacement. Visualization of the registered humerus and ulna indicated an increase in valgus angulation of the ulna with respect to the humerus after radial head excision; this increase was restored to that of an elbow with a native radial head following radial head arthroplasty. These findings are consistent with previous studies investigating elbow joint stability following radial head excision and arthroplasty. The current technique visualized a change in ulnar position in a single degree of freedom; using this approach, the coupled motion of the ulna in all 6 degrees of freedom can also be visualized.
3D Reconstruction of a Rotating Erupting Prominence
NASA Technical Reports Server (NTRS)
Thompson, W. T.; Kliem, B.; Torok, T.
2011-01-01
A bright prominence associated with a coronal mass ejection (CME) was seen erupting from the Sun on 9 April 2008. This prominence was tracked by both the Solar Terrestrial Relations Observatory (STEREO) EUVI and COR1 telescopes, and was seen to rotate about the line of sight as it erupted; therefore, the event has been nicknamed the "Cartwheel CME." The threads of the prominence in the core of the CME quite clearly indicate the structure of a weakly to moderately twisted flux rope throughout the field of view, up to heliocentric heights of 4 solar radii. Although the STEREO separation was 48 deg, it was possible to match some sharp features in the later part of the eruption as seen in the 304 Angstrom line in EUVI and in the H-alpha-sensitive bandpass of COR1 by both STEREO Ahead and Behind. These features could then be traced out in three-dimensional space, and reprojected into a view in which the eruption is directed towards the observer. The reconstructed view shows that the alignment of the prominence to the vertical axis rotates as it rises up to a leading-edge height of approximately 2.5 solar radii, and then remains approximately constant. The alignment at 2.5 solar radii differs by about 115 deg from the original filament orientation inferred from H-alpha and EUV data, and the height profile of the rotation, obtained here for the first time, shows that two thirds of the total rotation are reached within approximately 0.5 solar radii above the photosphere. These features are well reproduced by numerical simulations of an unstable, moderately twisted flux rope embedded in external flux with a relatively strong shear field component.
Near-infrared optical imaging of human brain based on the semi-3D reconstruction algorithm
NASA Astrophysics Data System (ADS)
Liu, Ming; Meng, Wei; Qin, Zhuanping; Zhou, Xiaoqing; Zhao, Huijuan; Gao, Feng
2013-03-01
In non-invasive brain imaging with near-infrared light, a precise head model is of great significance for the forward model and the image reconstruction. To deal with individual differences in human head tissues and the problem of irregular curvature, we extracted the head structure from the MRI image of a volunteer using the Mimics software. This scheme makes it possible to assign optical parameters to every layer of the head tissues reasonably and to solve the diffusion equation with finite-element analysis. For the solution of the inverse problem, a semi-3D reconstruction algorithm is adopted to trade off computation cost and accuracy between the full 3-D and 2-D reconstructions. In this scheme, the changes in the optical properties of the inclusions are assumed either axially invariable or confined to the imaging plane, while the 3-D nature of the photon migration is still retained. This leads to a 2-D inverse problem with a matched 3-D forward model. Simulation results show that, compared to the full 3-D reconstruction algorithm, the semi-3D algorithm reduces computation time by 27%.
3D reconstruction of a human heart fascicle using SurfDriver
NASA Astrophysics Data System (ADS)
Rader, Robert J.; Phillips, Steven J.; LaFollette, Paul S., Jr.
2000-06-01
The Temple University Medical School has a sequence of over 400 serial sections of normal adult human ventricular heart tissue, cut at 25 micrometer thickness. We used a Zeiss Ultraphot with a 4x planapo objective and a Pixera digital camera to make a series of 45 sequential montages for use in the 3D reconstruction of a fascicle (muscle bundle). We wrote custom software to merge 4 smaller image fields from each section into one composite image. We used SurfDriver software, developed by Scott Lozanoff of the University of Hawaii and David Moody of the University of Alberta, for registration, object boundary identification, and 3D surface reconstruction, and an Epson Stylus Color 900 printer to obtain photo-quality prints. We describe the challenges, and our solutions, in the following areas: image acquisition and digitization, image merging, alignment and registration, boundary identification, 3D surface reconstruction, 3D visualization and orientation, snapshots, and photo-quality printing.
A simple approach for 3D reconstruction of the spine from biplanar radiography
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Shi, Xinling; Lv, Liang; Guo, Fei; Zhang, Yufeng
2014-04-01
This paper proposes a simple approach for 3D spinal reconstruction from biplanar radiography. The proposed approach consists of reconstructing the 3D central curve of the spine based on epipolar geometry and automatically aligning the vertebrae under the constraint of this curve. The vertebral orientations are adjusted by matching the projections of the 3D pedicles with the 2D pedicles in the biplanar radiographs. The user interaction time is within one minute for a thoracic spine. Sixteen pairs of radiographs of a thoracic spinal model were used to evaluate precision and accuracy: the precision was within 3.1 mm for location and 3.5° for orientation, and the accuracy was within 3.5 mm for location and 3.9° for orientation. These results demonstrate that this approach can be a promising tool for obtaining 3D spinal geometry with acceptable user interaction in scoliosis clinics.
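Reconstructing a landmark on the 3D central curve from two calibrated radiographs can be sketched with standard linear (DLT) triangulation; the projection matrices below are generic placeholders, not the paper's calibration:

```python
# Linear (DLT) triangulation of a 3D point from two calibrated views.
# P1, P2 are 3x4 projection matrices; x1, x2 the observed pixel positions.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Solve the homogeneous system A X = 0 for the 3D point X."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous 3D point
    return X[:3] / X[3]
```

With the epipolar geometry established by calibration, the same solve can be repeated along the spine to obtain the whole 3D central curve.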
Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.
2006-01-01
The shapes of many natural and man-made objects have planar and curvilinear surfaces, and the images of such curves usually do not have enough distinctive features for conventional feature-based reconstruction algorithms to apply. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images; the correspondence between this pair of projections of the curve is assumed to be established. Using least-squares curve fitting, the parameters of each curve in 2-D space are found, and from these the 3-D quadratic curve is reconstructed. The relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The described reconstruction methodology is evaluated through simulation studies, and is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
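The 2-D least-squares curve fitting step can be sketched as a linear fit of the general conic a x^2 + b xy + c y^2 + d x + e y + f = 0, here fixing f = -1 to avoid the trivial all-zero solution (an assumption for the sketch; other normalizations are common):

```python
# Least-squares conic fitting: with f fixed to -1 the conic equation
# becomes a*x^2 + b*xy + c*y^2 + d*x + e*y = 1, a linear system in
# the remaining five coefficients.
import numpy as np

def fit_conic(x, y):
    """Least-squares conic coefficients (a, b, c, d, e) with f = -1."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs
```

At least five points in general position are needed; fitting image points of each projected curve this way supplies the conic parameters used to form the two back-projection cones.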
Analysis of method of 3D shape reconstruction using scanning deflectometry
NASA Astrophysics Data System (ADS)
Novák, Jiří; Novák, Pavel; Mikš, Antonín.
2013-04-01
This work presents a scanning deflectometric approach to the 3D surface reconstruction problem, based on measurements of the surface gradient of optically smooth surfaces. It is shown that the description of this problem leads to a first-order nonlinear partial differential equation (PDE), from which the surface shape can be reconstructed numerically. A method for efficiently solving this differential equation is proposed, based on transforming the PDE problem into an optimization problem. We describe different types of surface description for the shape reconstruction, and a numerical simulation of the presented method is performed. The reconstruction process is analyzed by computer simulations and illustrated with examples. The analysis confirms the robustness of the reconstruction method and its suitability for measurement and reconstruction of the 3D shape of specular surfaces.
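Once the surface gradient (p, q) = (dz/dx, dz/dy) has been measured by deflectometry, the shape itself must be recovered by integration. A minimal sketch using simple path integration (an illustration only; for noisy data one would use the optimization-based solve the paper proposes, or least-squares/Poisson integration):

```python
# Path integration of a gradient field: integrate q down the first
# column, then p along each row, recovering z up to an additive constant.
import numpy as np

def integrate_gradient(p, q, dx=1.0, dy=1.0):
    """Reconstruct z (up to a constant) from p = dz/dx and q = dz/dy."""
    z = np.zeros_like(p)
    z[1:, 0] = np.cumsum(q[1:, 0] * dy)
    z[:, 1:] = z[:, [0]] + np.cumsum(p[:, 1:] * dx, axis=1)
    return z
```

Path integration is exact for a consistent (curl-free) gradient field but accumulates noise along the path, which is why least-squares formulations are preferred in practice.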
A fast 3D reconstruction system with a low-cost camera accessory
NASA Astrophysics Data System (ADS)
Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.
2015-06-01
Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
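The reconstruction routine behind photometric stereo can be sketched per pixel: with the known LED directions stacked in a matrix L, the observed intensities I satisfy I = L n under a Lambertian assumption, and the (albedo-scaled) surface normal n is the least-squares solution. The light directions in the test are illustrative, not the accessory's actual geometry:

```python
# Per-pixel photometric stereo under the Lambertian model: solve the
# overdetermined system L g = I for the albedo-scaled normal g, then
# normalize to obtain the unit surface normal.
import numpy as np

def surface_normal(L, I):
    """Least-squares Lambertian surface normal from stacked light directions L."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    return g / np.linalg.norm(g)
```

Four lights, as in the accessory described above, give one redundant equation per pixel, which helps reject noise and detect shadowed measurements.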
3-D reconstruction of neurons from multichannel confocal laser scanning image series.
Wouterlood, Floris G
2014-01-01
A confocal laser scanning microscope (CLSM) collects information from a thin focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage between scans, produces Z-series of confocal images of a tissue volume, which can then be used for 3-D reconstruction of structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. PMID:24723320
Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction
NASA Astrophysics Data System (ADS)
Yu, Qian; Helmholz, Petra; Belton, David
2016-06-01
In recent years, 3D city models have come into high demand from many public and private organisations, and their steadily growing quality and quantity are increasing that demand further. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation is performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and the measures themselves are assessed for validity from the evaluation point of view.
3D reconstruction of a building from LIDAR data with first-and-last echo information
NASA Astrophysics Data System (ADS)
Zhang, Guoning; Zhang, Jixian; Yu, Jie; Yang, Haiquan; Tan, Ming
2007-11-01
As aerial LIDAR technology develops, automatic recognition and reconstruction of buildings from LIDAR datasets has become an important research topic, driven by the widespread applications of LIDAR data in city modeling, urban planning, etc. Using the information of the first-and-last echo data of the same laser point, this paper presents a scheme for 3D reconstruction of simple buildings, which mainly includes the following steps: recognition of non-boundary and boundary building points and generation of each building point cluster; localization of the boundary of each building; detection of the planes included in each cluster; and reconstruction of the building in 3D form. Experiments show that, for LIDAR data with first-and-last echo information, the scheme can effectively and efficiently reconstruct simple buildings in 3D, such as flat-roofed and gabled buildings.
Schmitt, J. C.; Talmadge, J. N.; Anderson, D. T.; Hanson, J. D.
2014-09-15
The bootstrap current for three electron cyclotron resonance heated plasma scenarios in a quasihelically symmetric stellarator (the Helically Symmetric Experiment) are analyzed and compared to a neoclassical transport code PENTA. The three conditions correspond to 50 kW input power with a resonance that is off-axis, 50 kW on-axis heating and 100 kW on-axis heating. When the heating location was moved from off-axis to on-axis with 50 kW heating power, the stored energy and the extrapolated steady-state current were both observed to increase. When the on-axis heating power was increased from 50 kW to 100 kW, the stored energy continued to increase while the bootstrap current slightly decreased. This trend is qualitatively in agreement with the calculations which indicate that a large positive electric field for the 100 kW case was driving the current negative in a small region close to the magnetic axis and accounting for the decrease in the total integrated current. This trend in the calculations is only observed to occur when momentum conservation between particle species is included. Without momentum conservation, the calculated bootstrap current increases monotonically. We show that the magnitude of the bootstrap current as calculated by PENTA agrees better with the experiment when momentum conservation between plasma species is included in the calculation. The total current was observed in all cases to flow in a direction to unwind the transform, unlike in a tokamak in which the bootstrap current adds to the transform. The 3-D inductive response of the plasma is simulated to predict the evolution of the current profile during the discharge. The 3-D equilibrium reconstruction code V3FIT is used to reconstruct profiles of the plasma pressure and current constrained by measurements with a set of magnetic diagnostics. The reconstructed profiles are consistent with the measured plasma pressure profile and the simulated current profile when the
NASA Astrophysics Data System (ADS)
Bourrion, O.; Bosson, G.; Grignon, C.; Bouly, J. L.; Richer, J. P.; Guillaudin, O.; Mayet, F.; Billard, J.; Santos, D.
2011-11-01
Directional detection of non-baryonic Dark Matter requires 3D reconstruction of low-energy nuclear recoil tracks. A gaseous micro-TPC matrix, filled with either 3He, CF4 or C4H10, has been developed within the MIMAC project. Dedicated acquisition electronics and real-time track reconstruction software have been developed to monitor a 512-channel prototype. This self-triggered electronics uses embedded processing to reduce the data transfer to its useful part only, i.e., the decoded coordinates of hit tracks and the corresponding energy measurements. An acquisition software with on-line monitoring and 3D track reconstruction is also presented.
Capurso, Daniel; Bengtsson, Henrik; Segal, Mark R.
2016-01-01
The spatial organization of the genome influences cellular function, notably gene regulation. Recent studies have assessed the three-dimensional (3D) co-localization of functional annotations (e.g. centromeres, long terminal repeats) using 3D genome reconstructions from Hi-C (genome-wide chromosome conformation capture) data; however, corresponding assessments for continuous functional genomic data (e.g. chromatin immunoprecipitation-sequencing (ChIP-seq) peak height) are lacking. Here, we demonstrate that applying bump hunting via the patient rule induction method (PRIM) to ChIP-seq data superposed on a Saccharomyces cerevisiae 3D genome reconstruction can discover ‘functional 3D hotspots’, regions in 3-space for which the mean ChIP-seq peak height is significantly elevated. For the transcription factor Swi6, the top hotspot by P-value contains MSB2 and ERG11 – known Swi6 target genes on different chromosomes. We verify this finding in a number of ways. First, this top hotspot is relatively stable under PRIM across parameter settings. Second, this hotspot is among the top hotspots by mean outcome identified by an alternative algorithm, k-Nearest Neighbor (k-NN) regression. Third, the distance between MSB2 and ERG11 is smaller than expected (by resampling) in two other 3D reconstructions generated via different normalization and reconstruction algorithms. This analytic approach can discover functional 3D hotspots and potentially reveal novel regulatory interactions. PMID:26869583
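The k-NN regression check mentioned in the abstract can be sketched in a few lines; the coordinates and peak heights below are toy values, not the yeast reconstruction data:

```python
import numpy as np

def knn_mean_outcome(coords, values, query, k=5):
    """k-NN regression in 3-space: mean ChIP-seq peak height over the k
    genomic bins nearest to a query point. Elevated values flag candidate
    'functional 3D hotspots', as in the abstract's alternative algorithm."""
    d = np.linalg.norm(coords - query, axis=1)
    return values[np.argsort(d)[:k]].mean()

# Toy reconstruction: bins along a line, with elevated peaks near the origin.
coords = np.array([[float(i), 0.0, 0.0] for i in range(10)])
values = np.array([9.0, 9.0, 9.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
hot = knn_mean_outcome(coords, values, np.array([0.0, 0.0, 0.0]), k=3)
cold = knn_mean_outcome(coords, values, np.array([9.0, 0.0, 0.0]), k=3)
```

Scanning a grid of query points and ranking them by this mean outcome gives a simple hotspot map to compare against the PRIM boxes.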
Impact of Level of Details in the 3d Reconstruction of Trees for Microclimate Modeling
NASA Astrophysics Data System (ADS)
Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.
2016-06-01
In the 21st century, urban areas experience specific climatic conditions such as urban heat islands, whose frequency and intensity increase over the years. Towards understanding and monitoring these conditions, the effects of vegetation on urban climate are studied. It appears that a natural phenomenon, the evapotranspiration of trees, generates a cooling effect in the urban environment. In this work, a 3D microclimate model is used to quantify the evapotranspiration of trees in relation to their architecture, their physiology and the climate. These three characteristics are determined with field measurements and data processing. Based on point clouds acquired with a terrestrial laser scanner (TLS), the 3D reconstruction of the tree wood architecture is performed. Then the 3D reconstruction of leaves is carried out from the 3D skeleton of vegetative shoots and allometric statistics. With the aim of extending the simulation to several trees simultaneously, it is necessary to apply the 3D reconstruction process to each tree individually. However, for the acquisition as well as the processing, the 3D reconstruction approach is time-consuming. Mobile laser scanners could provide point clouds faster than static TLS, but this implies a lower point density. The processing time could also be shortened, under the assumption that a coarser 3D model is sufficient for the simulation. In this context, the level of detail and accuracy of the reconstructed tree 3D model must be studied. In this paper, first tests to assess their impact on the determination of evapotranspiration are presented.
Chen, Yong; Cai, Jiye; Zhao, Tao; Wang, Chenxi; Dong, Shuo; Luo, Shuqian; Chen, Zheng W.
2010-01-01
Thin sectioning has been widely applied in electron microscopy (EM) and successfully used for in situ observation of the inner ultrastructure of cells. This powerful technique has recently been extended to the research field of atomic force microscopy (AFM). However, there have been no reports describing AFM imaging of serial thin sections and three-dimensional (3-D) reconstruction of cells and their inner structures. In the present study, we used AFM to scan serial thin sections approximately 60 nm thick of a mouse embryonic stem (ES) cell, and to observe the in situ inner ultrastructure including cell membrane, cytoplasm, mitochondria, nucleus membrane, and linear chromatin. The high-magnification AFM imaging of single mitochondria clearly demonstrated the outer membrane, inner boundary membrane and cristal membrane of mitochondria in the cellular compartment. Importantly, AFM imaging of six serial thin sections of a single mouse ES cell showed that mitochondria underwent sequential changes in number, morphology and distribution. These nanoscale images allowed us to perform 3-D surface reconstruction of interior structures of interest in cells. Based on the serial in situ images, 3-D models of the morphological characteristics, numbers and distributions of interior structures of the single ES cells were reconstructed and validated. Our results suggest that the combined AFM and serial-thin-section technique is useful for nanoscale imaging and 3-D reconstruction of single cells and their inner structures. This technique may facilitate studies of proliferating and differentiating stages of stem cells or somatic cells at the nanoscale. PMID:15850704
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, emerging free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web-services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models is lower than that of laser scanning data. Within this contribution, we investigate the results of automated web-services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series from different digital cameras. Two different web-services, namely Arc3D and AutoDesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of model generation is given, considering both interactive and processing-time costs.
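The difference models used in such comparisons boil down to cloud-to-cloud deviations; a minimal brute-force sketch on a toy planar patch (real tools use a spatial index and mesh-to-cloud distances):

```python
import numpy as np

def cloud_to_cloud(test_pts, ref_pts):
    """Unsigned nearest-neighbour distance from every point of a test cloud
    to a reference cloud -- the basis of a difference model grading an
    image-based reconstruction against a TLS reference."""
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    return d.min(axis=1)

# Toy case: a web-service model sitting 0.002 units off a planar TLS patch.
ref = np.array([[x, y, 0.0] for x in range(10) for y in range(10)], float)
test = ref + np.array([0.0, 0.0, 0.002])
dev = cloud_to_cloud(test, ref)      # mean/max of dev summarise the deviation
```

For large scans the O(N*M) distance matrix is replaced by a k-d tree query, but the reported statistics are the same.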
3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries
NASA Astrophysics Data System (ADS)
Izadi, Mohammad
In this thesis, the problem of three dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Here, two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as the output, which contains 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and retrieves all slant-view images that contain this scene from an input image dataset. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west) based on the geolocation and auxiliary data accompanying the input data (metadata that describes the acquisition parameters at capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs 3D flat surfaces that are visible in each view using an iterative algorithm. 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. For the experimental results, both systems are evaluated quantitatively and qualitatively, and different aspects of the two systems, including accuracy, stability, and execution time, are discussed.
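The shadow-based height estimation rests on a simple geometric relation, which the thesis's fuzzy rule-based method refines by comparing predicted and observed shadow regions. A sketch under the assumptions of flat ground and a known solar elevation from the image metadata:

```python
import math

def building_height(shadow_len_m, sun_elevation_deg):
    """Height of a building from its shadow on flat ground, given the solar
    elevation angle at capture time: h = s * tan(elevation)."""
    return shadow_len_m * math.tan(math.radians(sun_elevation_deg))

# A 20 m shadow under a 45-degree sun implies a 20 m building.
h = building_height(20.0, 45.0)
```

In practice the method runs this relation in reverse, predicting the shadow region for a candidate height and scoring it against the shadow evidence.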
Comparison of Parallel MRI Reconstruction Methods for Accelerated 3D Fast Spin-Echo Imaging
Xiao, Zhikui; Hoge, W. Scott; Mulkern, R.V.; Zhao, Lei; Hu, Guangshu; Kyriakos, Walid E.
2014-01-01
Parallel MRI (pMRI) achieves imaging acceleration by partially substituting gradient-encoding steps with spatial information contained in the component coils of the acquisition array. Variable-density subsampling in pMRI was previously shown to yield improved two-dimensional (2D) imaging in comparison to uniform subsampling, but has yet to be used routinely in clinical practice. In an effort to reduce acquisition time for 3D fast spin-echo (3D-FSE) sequences, this work explores a specific nonuniform sampling scheme for 3D imaging, subsampling along two phase-encoding (PE) directions on a rectilinear grid. We use two reconstruction methods—2D-GRAPPA-Operator and 2D-SPACE RIP—and present a comparison between them. We show that high-quality images can be reconstructed using both techniques. To evaluate the proposed sampling method and reconstruction schemes, results via simulation, phantom study, and in vivo 3D human data are shown. We find that fewer artifacts can be seen in the 2D-SPACE RIP reconstructions than in 2D-GRAPPA-Operator reconstructions, with comparable reconstruction times. PMID:18727083
3D Coronal Magnetic Field Reconstruction Based on Infrared Polarimetric Observations
NASA Astrophysics Data System (ADS)
Kramar, M.; Lin, H.; Tomczyk, S.
2014-12-01
Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal phenomena at all scales. Significant progress has recently been achieved with the deployment of the Coronal Multichannel Polarimeter (CoMP) of the High Altitude Observatory (HAO). The instrument provides polarization measurements of the Fe XIII 10747 A forbidden line emission. The observed polarization signals are the result of line-of-sight (LOS) integration through nonuniform temperature, density and magnetic field distributions. In order to resolve the LOS problem and utilize this type of data, a vector tomography method has been developed for 3D reconstruction of the coronal magnetic field. The 3D electron density and temperature, needed as additional input, have been reconstructed by a tomography method based on STEREO/EUVI data. We will present the 3D coronal magnetic field and the associated 3D curl B, density, and temperature resulting from these inversions.
Using of Bezier Interpolation in 3D Reconstruction of Human Femur Bone
NASA Astrophysics Data System (ADS)
Toth-Tascau, Mirela; Pater, Flavius; Stoia, Dan Ioan; Menyhardt, Karoly; Rosu, Serban; Rusu, Lucian; Vigaru, Cosmina
2011-09-01
The paper is focused on image acquisition and processing of CT scans of a human femur bone in order to obtain 3D reconstructions of the human femur. The objective of the presented study was to obtain a realistic 3D model of the human femur bone. The reconstructed model provides useful data to the physician, but more important are the data and 3D models that can be used for virtual testing of femoral implants and endoprostheses. Using B-spline patches, a 3D volume model of the human femur bone can be achieved. This model can be easily imported into any CAD system, resulting in a virtual femur model which can be used in FEM analysis.
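Bezier evaluation by de Casteljau's algorithm is the basic building block behind such contour interpolation; a sketch with toy control points, not CT data:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated linear
    interpolation of the control points (de Casteljau's algorithm)."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Sample a quadratic Bezier fitted to three control points of one contour.
curve = [de_casteljau([[0, 0], [1, 2], [2, 0]], t / 10) for t in range(11)]
```

Stacking such interpolated contours slice by slice, and blending between slices, yields the smooth surface patches the abstract describes.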
3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images
NASA Astrophysics Data System (ADS)
Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Samcg, Thom
Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to 2 dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential matrix based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach and correspondence between points from different images was calculated. The results of 3D reconstruction show the centreline of retinal vessels and their 3D curvature clearly. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.
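The final step, recovering a 3D centreline point from its two segmented projections, can be sketched with linear (DLT) triangulation; the camera matrices below are hypothetical, not the self-calibrated fundus-camera model:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its 2D projections
    x1, x2 in two views with 3x4 camera matrices P1, P2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]        # null vector of A, homogeneous point
    return X[:3] / X[3]

# Two hypothetical pinhole cameras a small baseline apart.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

With noisy correspondences the linear estimate is usually refined by minimising reprojection error.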
Reliable Gait Recognition Using 3D Reconstructions and Random Forests - An Anthropometric Approach.
Sandau, Martin; Heimbürger, Rikke V; Jensen, Karl E; Moeslund, Thomas B; Aanaes, Henrik; Alkjaer, Tine; Simonsen, Erik B
2016-05-01
Photogrammetric measurements of bodily dimensions and analysis of gait patterns in CCTV are important tools in forensic investigations, but accurate extraction of the measurements is challenging. This study tested whether manual annotation of the joint centers on 3D reconstructions could provide reliable recognition. Sixteen participants performed normal walking while 3D reconstructions were obtained continually. Segment lengths and kinematics of the extremities were manually extracted by eight expert observers. The results showed that all the participants were recognized, assuming the same expert annotated the data. Recognition based on data annotated by different experts was less reliable, achieving 72.6% correct recognitions, as some parameters were heavily affected by interobserver variability. This study verified that 3D reconstructions are feasible for forensic gait analysis as an improved alternative to conventional CCTV. However, further studies are needed to account for the use of different clothing, field conditions, etc. PMID:27122399
Moriconi, S; Scalco, E; Broggi, S; Avuzzi, B; Valdagni, R; Rizzo, G
2015-08-01
A novel approach for three-dimensional (3D) surface reconstruction of anatomical structures in radiotherapy (RT) is presented. This is obtained from manual cross-sectional contours by combining image voxel segmentation processing and implicit surface streaming methods using wavelets. 3D meshes reconstructed with the proposed approach are compared to those obtained from a traditional triangulation algorithm. Qualitative and quantitative evaluations are performed in terms of mesh quality metrics. Differences in smoothness, detail and accuracy are observed in the comparison, considering three different anatomical districts and several organs at risk in radiotherapy. Overall, the best performances were recorded for the proposed approach, regardless of the complexity of the anatomical structure. This demonstrates the efficacy of the proposed approach for 3D surface reconstruction in radiotherapy and allows for further specific image analyses using real biomedical data. PMID:26737226
GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.
Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H
2012-09-01
Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
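The core of DRR generation is a line integral of attenuation per detector pixel; a minimal parallel-ray sketch follows (real 2-D/3-D registration uses perspective ray casting, GPU-parallelised as in the paper, and the toy volume here is hypothetical):

```python
import numpy as np

def drr_axial(volume, spacing_mm=1.0):
    """Parallel-ray DRR: Beer-Lambert attenuation of rays cast along the
    z axis of a CT attenuation volume -- the simplest digitally
    reconstructed radiograph."""
    return np.exp(-volume.sum(axis=2) * spacing_mm)

# A dense block inside an otherwise empty 4 x 4 x 8 volume.
vol = np.zeros((4, 4, 8))
vol[1:3, 1:3, :] = 0.1
img = drr_axial(vol)   # darker where the rays cross the dense block
```

Each output pixel is independent of the others, which is exactly what makes the computation embarrassingly parallel on a GPU.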
Full 3-D cluster-based iterative image reconstruction tool for a small animal PET camera
NASA Astrophysics Data System (ADS)
Valastyán, I.; Imrek, J.; Molnár, J.; Novák, D.; Balkay, L.; Emri, M.; Trón, L.; Bükki, T.; Kerek, A.
2007-02-01
Iterative reconstruction methods are commonly used to obtain images with high resolution and good signal-to-noise ratio in nuclear imaging. The aim of this work was to develop a scalable, fast, cluster based, fully 3-D iterative image reconstruction package for our small animal PET camera, the miniPET. The reconstruction package is developed to determine the 3-D radioactivity distribution from list mode type of data sets and it can also simulate noise-free projections of digital phantoms. We separated the system matrix generation and the fully 3-D iterative reconstruction process. As the detector geometry is fixed for a given camera, the system matrix describing this geometry is calculated only once and used for every image reconstruction, making the process much faster. The Poisson and the random noise sensitivity of the ML-EM iterative algorithm were studied for our small animal PET system with the help of the simulation and reconstruction tool. The reconstruction tool has also been tested with data collected by the miniPET from a line and a cylinder shaped phantom and also a rat.
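The ML-EM update used in such packages has a compact closed form; a toy sketch with a hypothetical 2x2 system matrix (the miniPET matrix is precomputed once from the fixed detector geometry, as the abstract notes):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """ML-EM iterations for emission tomography:
    x <- (x / s) * A^T (y / (A x)), with sensitivity image s = A^T 1.
    A is the system matrix, y the measured projections."""
    x = np.ones(A.shape[1])                     # nonnegative initial image
    s = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # guard against empty bins
        x = x / s * (A.T @ ratio)
    return x

# Tiny 2-pixel, 2-bin system with noise-free projections of a known image.
A = np.array([[1.0, 0.0], [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
x_hat = mlem(A, A @ x_true)
```

List-mode and cluster-parallel variants distribute the forward and back projections, but the multiplicative update itself is unchanged.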
SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry
Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N
2014-06-01
Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort of patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error using the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128).
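The eight-point step of such a pipeline can be sketched as follows; noise-free synthetic correspondences stand in for the SIFT matches, and the RANSAC loop and Hartley normalisation are omitted for brevity:

```python
import numpy as np

def eight_point(x1, x2):
    """Direct eight-point estimate of the fundamental matrix from N >= 8
    homogeneous correspondences (rows of x1, x2). Inside RANSAC this runs
    on each random minimal sample."""
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)                  # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

# Noise-free synthetic pair: camera 2 translated one unit along x.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (12, 3)) + np.array([0.0, 0.0, 5.0])
x1 = np.column_stack([X[:, :2] / X[:, 2:3], np.ones(12)])
Xs = X + np.array([1.0, 0.0, 0.0])
x2 = np.column_stack([Xs[:, :2] / Xs[:, 2:3], np.ones(12)])
F = eight_point(x1, x2)
residuals = np.abs(np.sum(x2 * (x1 @ F.T), axis=1))   # x2^T F x1 per pair
```

With real, noisy matches the coordinates are first normalised and the estimate is wrapped in RANSAC to reject outliers, as the abstract describes.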
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
Chen, G; Pan, X; Stayman, J; Samei, E
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical
3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2012-01-01
Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free form artworks. The structured light scanner provides high resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referring metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been experienced through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork has been a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079
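The rotation-translation step that transposes each range map into the global reference frame is a rigid transform; a minimal sketch with a hypothetical pose from the photogrammetric tracker:

```python
import numpy as np

def to_global(points, R, t):
    """Transpose a range map (N x 3) from the scanner's local view frame
    into the global frame defined by the tracking sensor: p' = R p + t."""
    return points @ R.T + t

# Hypothetical pose: 90-degree rotation about z plus a translation along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 0.0])
p_global = to_global(np.array([[1.0, 0.0, 0.0]]), R, t)
```

Applying the per-view pose to every range map places all scans in one coordinate system, after which the maps can be merged into a single model.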
Manifold Based Optimization for Single-Cell 3D Genome Reconstruction
Collas, Philippe
2015-01-01
The three-dimensional (3D) structure of the genome is important for the orchestration of gene expression and cell differentiation. While mapping genomes in 3D has long been elusive, recent adaptations of high-throughput sequencing to chromosome conformation capture (3C) techniques allow genome-wide structural characterization for the first time. However, reconstruction of "consensus" 3D genomes from 3C-based data is a challenging problem, since the data are aggregated over millions of cells. Recent single-cell adaptations of the 3C technique allow for non-aggregated structural assessment of genome structure, but the data suffer from sparse and noisy interaction sampling. We present a manifold based optimization (MBO) approach for the reconstruction of 3D genome structure from chromosomal contact data. We show that MBO is able to reconstruct 3D structures based on the chromosomal contacts, imposing fewer structural violations than comparable methods. Additionally, MBO is suitable for efficient high-throughput reconstruction of large systems, such as entire genomes, allowing for comparative studies of genomic structure across cell lines and different species. PMID:26262780
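The abstract does not give the MBO algorithm itself, but the core idea of recovering 3D coordinates from contact frequencies can be illustrated with a simpler classical-MDS baseline. The power-law exponent `alpha` and the function name `contacts_to_coords` are illustrative assumptions, not from the paper:

```python
import numpy as np

def contacts_to_coords(contacts, alpha=1.0):
    """Reconstruct 3D coordinates from a symmetric contact-frequency
    matrix via classical multidimensional scaling (MDS).

    Distances come from the common power-law heuristic
    d_ij ~ contacts_ij ** (-alpha); alpha is an assumed exponent.
    """
    c = np.asarray(contacts, dtype=float)
    with np.errstate(divide="ignore"):
        d = np.where(c > 0, c ** (-alpha), 0.0)
    np.fill_diagonal(d, 0.0)
    d2 = d ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ d2 @ j                 # double-centred Gram matrix
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:3]         # top-3 eigenpairs -> 3D embedding
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The recovered coordinates are unique only up to rotation, reflection, and translation, which is why comparisons are usually made on pairwise distances rather than raw coordinates.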
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
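As a rough sketch of the orientation-extraction idea, a bank of Gabor filters can label each pixel with the edge orientation that responds most strongly; such labels then serve as extra matching constraints. The kernel parameters (`ksize`, `sigma`, `lam`) and the four-orientation bank are illustrative choices, not the paper's configuration:

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(theta, ksize=21, sigma=3.0, lam=8.0):
    """Real-valued Gabor kernel tuned to intensity variation along
    direction theta (wavelength lam, Gaussian envelope sigma)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()                      # zero mean: no DC response

def dominant_orientation(img, n_orient=4):
    """Per pixel, the index of the Gabor orientation with the strongest
    absolute response, plus the list of orientations used."""
    thetas = [np.pi * k / n_orient for k in range(n_orient)]
    stack = np.stack([np.abs(ndimage.convolve(img.astype(float),
                                              gabor_kernel(th),
                                              mode="nearest"))
                      for th in thetas])
    return np.argmax(stack, axis=0), thetas
```

On a vertical grating (intensity varying along x), the theta = 0 filter dominates, as expected.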
3D reconstruction on CBCT in the cystic pathology of the jaws
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
The paper presents the image acquisition of Cone Beam Computed Tomography scans of human facial bones and their processing in order to obtain a 3D reconstruction model of the skull. The reconstructed model provides useful data to the physician in cases of maxillary cystic pathology and, more importantly, data about the relationship of the maxillary cyst with the surrounding anatomical elements. Using B-splines, a 3D volume model of the human facial bones can be achieved. This model can be exported to any CAD system, resulting in a virtual model which can be used in FEM analysis.
NASA Astrophysics Data System (ADS)
González, C. A.; Dávila, A.; Garnica, G.
2007-09-01
Two projection systems that use an LCoS phase modulator are proposed for 3D shape reconstruction. The LCoS is used as a holographic system or as a weak phase projector; both configurations project a set of fringe patterns that are processed by the technique known as temporal phase unwrapping. To minimize the influence of camera sampling and the speckle noise in the projected fringes, a speckle noise reduction technique is applied to the speckle patterns generated by the holographic optical system. Experiments with 3D shape reconstruction of an ophthalmic mold and other test specimens show the viability of the proposed techniques.
Ribes, Delphine; Parafita, Julia; Charrier, Rémi; Magara, Fulvio; Magistretti, Pierre J; Thiran, Jean-Philippe
2010-01-01
In this article we introduce JULIDE, a software toolkit developed to perform the 3D reconstruction, intensity normalization, volume standardization by 3D image registration and voxel-wise statistical analysis of autoradiographs of mouse brain sections. This software tool has been developed in the open-source ITK software framework and is freely available under a GPL license. The article presents the complete image processing chain from raw data acquisition to 3D statistical group analysis. Results of the group comparison in the context of a study on spatial learning are shown as an illustration of the data that can be obtained with this tool. PMID:21124830
Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Yu, Q.; Helmholz, P.; Belton, D.; West, G.
2014-04-01
The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.
3D face reconstruction from limited images based on differential evolution
NASA Astrophysics Data System (ADS)
Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.
2011-09-01
3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for 3D face synthesis and reconstruction using a simple and efficient global optimizer. A 3D-2D matching algorithm that integrates the 3D morphable model (3DMM) and the differential evolution (DE) algorithm is presented. In 3DMM, the process of fitting shape and texture information to 2D images is treated as a search for the global minimum in a high-dimensional feature space, in which optimization is prone to local convergence. Unlike the traditional scheme used in 3DMM, DE is robust against stagnation in local minima and insensitive to initial values in face reconstruction. Benefiting from DE's performance, 3D face models can be created from a single 2D image under various illumination and pose contexts. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high performance. The validation process shows only an insignificant difference between the input image and the 2D face image projected by the 3D model.
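The DE optimizer referenced above is a standard population-based algorithm; a minimal DE/rand/1/bin sketch is given below. The hyperparameters `F`, `CR`, and `pop_size` are generic defaults, not the paper's settings:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           iters=200, seed=0):
    """Minimal DE/rand/1/bin minimizer.  `bounds` is a list of
    (low, high) pairs, one per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # three distinct donors, none equal to the target i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                 # greedy selection
                pop[i], cost[i] = trial, tc
    best = np.argmin(cost)
    return pop[best], cost[best]
```

In the paper's setting, `f` would be the 3DMM fitting residual over shape/texture coefficients; here any black-box cost works.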
Technology Transfer Automated Retrieval System (TEKTRAN)
Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...
Demonstration of digital hologram recording and 3D-scenes reconstruction in real-time
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Kulakov, Mikhail N.; Kurbatova, Ekaterina A.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.
2016-04-01
Digital holography is a technique that allows reconstruction of information about 2D objects and 3D scenes. This is achieved by registration of the interference pattern formed by two beams: an object beam and a reference beam. The pattern registered by a digital camera is then processed, which yields the amplitude and phase of the object beam. Reconstruction of the shape of 2D objects and 3D scenes can be performed numerically (using a computer) or optically (using spatial light modulators - SLMs). In this work, a Megaplus II ES11000 camera was used for digital hologram recording. The camera has 4008 × 2672 pixels with sizes of 9 μm × 9 μm. For hologram recording, a 50 mW frequency-doubled Nd:YAG laser with a wavelength of 532 nm was used. A liquid crystal on silicon SLM, the HoloEye PLUTO VIS, was used for optical reconstruction of the digital holograms. The SLM has 1920 × 1080 pixels with sizes of 8 μm × 8 μm. For object reconstruction, a 10 mW He-Ne laser with a wavelength of 632.8 nm was used. The setups for digital hologram recording and optical reconstruction with the SLM were combined as follows. The MegaPlus Central Control Software displays the frames registered by the camera on the computer monitor with a small delay, and the SLM can operate as an additional monitor. As a result, the registered frames can be shown on the SLM display in near real-time, so recording and reconstruction of the 3D scenes is achieved in real-time. The resolution of the displayed frames was chosen equal to that of the SLM; the number of pixels was thus limited by the SLM resolution, and the frame rate by that of the camera. This holographic video setup was applied without additional program implementations that would increase time delays between hologram recording and object reconstruction. The setup was demonstrated for reconstruction of 3D scenes.
Using flow information to support 3D vessel reconstruction from rotational angiography
Waechter, Irina; Bredno, Joerg; Weese, Juergen; Barratt, Dean C.; Hawkes, David J.
2008-07-15
For the assessment of cerebrovascular diseases, it is beneficial to obtain three-dimensional (3D) morphologic and hemodynamic information about the vessel system. Rotational angiography is routinely used to image the 3D vascular geometry and we have shown previously that rotational subtraction angiography has the potential to also give quantitative information about blood flow. Flow information can be determined when the angiographic sequence shows inflow and possibly outflow of contrast agent. However, a standard volume reconstruction assumes that the vessel tree is uniformly filled with contrast agent during the whole acquisition. If this is not the case, the reconstruction exhibits artifacts. Here, we show how flow information can be used to support the reconstruction of the 3D vessel centerline and radii in this case. Our method uses the fast marching algorithm to determine the order in which voxels are analyzed. For every voxel, the rotational time intensity curve (R-TIC) is determined from the image intensities at the projection points of the current voxel. Next, the bolus arrival time of the contrast agent at the voxel is estimated from the R-TIC. Then, a measure of the intensity and duration of the enhancement is determined, from which a speed value is calculated that steers the propagation of the fast marching algorithm. The results of the fast marching algorithm are used to determine the 3D centerline by backtracking. The 3D radius is reconstructed from 2D radius estimates on the projection images. The proposed method was tested on computer simulated rotational angiography sequences with systematically varied x-ray acquisition, blood flow, and contrast agent injection parameters and on datasets from an experimental setup using an anthropomorphic cerebrovascular phantom. For the computer simulation, the mean absolute error of the 3D centerline and 3D radius estimation was 0.42 and 0.25 mm, respectively. For the experimental datasets, the mean absolute
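The bolus arrival time estimation step described above can be illustrated with a simple fractional-peak threshold on a rotational time-intensity curve. The half-maximum criterion and the linear interpolation between samples are common conventions, not necessarily the authors' exact estimator:

```python
import numpy as np

def bolus_arrival_time(tic, t=None, frac=0.5):
    """Estimate contrast-agent arrival from a time-intensity curve (TIC)
    as the first time the baseline-subtracted enhancement crosses `frac`
    of its peak, with linear interpolation between samples."""
    tic = np.asarray(tic, float)
    if t is None:
        t = np.arange(len(tic), dtype=float)
    enh = tic - tic.min()                 # baseline subtraction
    thr = frac * enh.max()
    k = np.nonzero(enh >= thr)[0][0]      # first sample at/above threshold
    if k == 0:
        return t[0]
    # interpolate between samples k-1 and k
    f = (thr - enh[k - 1]) / (enh[k] - enh[k - 1])
    return t[k - 1] + f * (t[k] - t[k - 1])
```

In the paper, such per-voxel arrival times (from the R-TICs) steer the fast-marching propagation; here the estimator is shown in isolation.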
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car that was driven backwards into a garage. It was unclear whether the driver drove backwards once or twice, the latter indicating that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. PMID:22727689
3D reconstruction of complex geological bodies: Examples from the Alps
NASA Astrophysics Data System (ADS)
Zanchi, Andrea; Francesca, Salvi; Stefano, Zanchetta; Simone, Sterlacchini; Graziano, Guerra
2009-01-01
Cartographic geological and structural data collected in the field and managed by Geographic Information Systems (GIS) technology can be used for 3D reconstruction of complex geological bodies. Using a link between GIS tools and gOcad, stratigraphic and tectonic surfaces can be reconstructed taking into account any geometrical constraint derived from field observations. Complex surfaces can be reconstructed from large data sets analysed by suitable geometrical techniques. Three main typologies of geometric features and related attributes are exported from a GIS geodatabase: (1) topographic data as points from a digital elevation model; (2) stratigraphic and tectonic boundaries, and linear features as 2D polylines; (3) structural data as points. After the available information has been imported into gOcad, the following steps are performed: (1) construction of the topographic surface by interpolation of points; (2) 3D mapping of the linear geological boundaries and linear features by vertical projection onto the reconstructed topographic surface; (3) definition of geometrical constraints from planar and linear outcrop data; (4) construction of a network of cross-sections based on field observations and geometrical constraints; (5) creation of 3D surfaces, closed volumes and grids from the constructed objects. Three examples of the reconstruction of complex geological bodies from the Italian Alps are presented here. The methodology demonstrates that although only outcrop data were available, 3D modelling allowed the geometrical consistency of the interpretative 2D sections and of the field geology to be checked, through 3D visualisation of the geometrical models. Application of a 3D geometrical model to the case studies can be very useful in geomechanical modelling for slope stability or resource evaluation.
Reconstruction of 3D ultrasound images based on Cyclic Regularized Savitzky-Golay filters.
Toonkum, Pollakrit; Suwanwela, Nijasri C; Chinrungrueng, Chedsada
2011-02-01
This paper presents a new three-dimensional (3D) ultrasound reconstruction algorithm for generation of 3D images from a series of two-dimensional (2D) B-scans acquired in the mechanical linear scanning framework. Unlike most existing 3D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regular pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the Cyclic Regularized Savitzky-Golay (CRSG) filter, is a new variant of the Savitzky-Golay (SG) smoothing filter. The CRSG filter improves upon the original SG filter in two respects: first, a cyclic indicator function has been incorporated into the least-squares cost function to enable the CRSG filter to approximate nonuniformly spaced data of the unobserved image intensities contained in unfilled voxels and to reduce speckle noise of the observed image intensities contained in filled voxels. Second, a regularization function has been added to the least-squares cost function as a mechanism to balance the degree of speckle reduction against the degree of detail preservation. The CRSG filter has been evaluated and compared with Voxel Nearest-Neighbor (VNN) interpolation post-processed by the Adaptive Speckle Reduction (ASR) filter, VNN interpolation post-processed by the Adaptive Weighted Median (AWM) filter, Distance-Weighted (DW) interpolation, and Adaptive Distance-Weighted (ADW) interpolation, on reconstructing a synthetic 3D spherical image and a clinical 3D carotid artery bifurcation in the mechanical linear scanning framework. This preliminary evaluation indicates that the CRSG filter is more effective in both speckle reduction and geometric reconstruction of 3D ultrasound images than the other methods. PMID:20696448
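The CRSG filter itself is a custom variant, but the underlying Savitzky-Golay smoothing it extends is available off the shelf, for instance in SciPy. The window length and polynomial order below are arbitrary demonstration values, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + rng.normal(scale=0.3, size=t.size)

# Local cubic polynomial fit over a 21-sample window: the classic
# SG setting that smooths noise while preserving peak shapes.
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
```

The SG filter assumes uniformly spaced samples, which is exactly the regularity of mechanical linear scanning that the CRSG filter exploits; the cyclic indicator and regularization terms are the paper's additions for unfilled voxels and detail preservation.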
NASA Astrophysics Data System (ADS)
Khongsomboon, Khamphong; Hamamoto, Kazuhiko; Kondo, Shozo
3D reconstruction from ordinary X-ray equipment, rather than CT or MRI, is required in clinical veterinary medicine. The authors have already proposed a 3D reconstruction technique based on X-ray photographs to present bone structure. Although the reconstruction is useful for veterinary medicine, the technique has two problems: one concerns X-ray exposure, and the other the data acquisition process. An X-ray modality that is not specialized but can solve these problems is X-ray fluoroscopy. Therefore, in this paper, we propose a method for 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine. Fluoroscopy is usually used to observe the movement of an organ, or to identify the position of an organ for surgery, using a weak X-ray intensity. Since fluoroscopy can output the observed result as a movie, the two problems caused by the use of X-ray photographs can be solved. However, a new problem arises due to the weak X-ray intensity: although fluoroscopy presents information on not only bone structure but also soft tissues, the contrast is very low and it is very difficult to recognize some soft tissues. It would be very useful to be able to observe not only bone structure but also soft tissues clearly with ordinary X-ray equipment in clinical veterinary medicine. To solve this problem, this paper proposes a new method to determine opacity in the volume rendering process. The opacity is determined according to the 3D differential coefficient of the 3D reconstruction. This differential volume rendering can present a 3D structure image of multiple organs volumetrically and clearly for clinical veterinary medicine. This paper shows results of simulations, an experimental investigation of a small dog, and an evaluation by veterinarians.
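A minimal sketch of the idea of deriving opacity from the 3D differential coefficient (gradient magnitude) of the reconstruction: tissue boundaries get high opacity, homogeneous interiors stay transparent. The exponential transfer function and the `gain` parameter are assumptions for illustration, not the paper's mapping:

```python
import numpy as np

def gradient_opacity(volume, gain=1.0):
    """Map each voxel's 3D gradient magnitude to an opacity in [0, 1):
    boundaries between tissues become opaque, flat regions transparent."""
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    if mag.max() > 0:
        mag = mag / mag.max()             # normalize to [0, 1]
    return 1.0 - np.exp(-gain * mag)      # smooth monotone transfer function
```

On a synthetic volume with a single material interface, only the voxels at the interface receive nonzero opacity.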
The point-source method for 3D reconstructions for the Helmholtz and Maxwell equations
NASA Astrophysics Data System (ADS)
Ben Hassen, M. F.; Erhard, K.; Potthast, R.
2006-02-01
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than with the former use of the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. Both for 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
A new method to combine 3D reconstruction volumes for multiple parallel circular cone beam orbits
Baek, Jongduk; Pelc, Norbert J.
2010-01-01
Purpose: This article presents a new reconstruction method for 3D imaging using a multiple 360° circular orbit cone beam CT system, specifically a way to combine 3D volumes reconstructed with each orbit. The main goal is to improve the noise performance in the combined image while avoiding cone beam artifacts. Methods: The cone beam projection data of each orbit are reconstructed using the FDK algorithm. When at least a portion of the total volume can be reconstructed by more than one source, the proposed combination method combines these overlap regions using weighted averaging in frequency space. The local exactness and the noise performance of the combination method were tested with computer simulations of a Defrise phantom, a FORBILD head phantom, and uniform noise in the raw data. Results: A noiseless simulation showed that the local exactness of the reconstructed volume from the source with the smallest tilt angle was preserved in the combined image. A noise simulation demonstrated that the combination method improved the noise performance compared to a single orbit reconstruction. Conclusions: In CT systems which have overlap volumes that can be reconstructed with data from more than one orbit and in which the spatial frequency content of each reconstruction can be calculated, the proposed method offers improved noise performance while keeping the local exactness of data from the source with the smallest tilt angle. PMID:21089770
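The combination step, weighted averaging of overlapping reconstructions in frequency space, can be sketched as follows. The weight array here is generic, whereas the paper derives its weights from the spatial frequency content of each orbit's reconstruction:

```python
import numpy as np

def combine_freq(vol_a, vol_b, w_a):
    """Combine two reconstructions of the same volume by weighted
    averaging in frequency space.  `w_a` is a per-frequency weight array
    (same shape, values in [0, 1]) applied to vol_a's spectrum; vol_b
    receives the complementary weight 1 - w_a."""
    fa = np.fft.fftn(vol_a)
    fb = np.fft.fftn(vol_b)
    return np.fft.ifftn(w_a * fa + (1.0 - w_a) * fb).real
```

With a uniform weight of 0.5 this reduces, by linearity of the FFT, to a plain voxel-wise average; non-uniform weights let the method keep each orbit's well-sampled frequencies while averaging down noise in the overlap.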
Hansis, Eberhard; Schäfer, Dirk; Dössel, Olaf; Grass, Michael
2008-11-01
A 3-D reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular disease, compared to 2-D X-ray angiograms. Besides improved roadmapping, quantitative vessel analysis is possible. Due to the heart's motion, rotational coronary angiography typically provides only 5-10 projections for the reconstruction of each cardiac phase, which leads to a strongly undersampled reconstruction problem. Such an ill-posed problem can be approached with regularized iterative methods. The coronary arteries cover only a small fraction of the reconstruction volume. Therefore, the minimization of the L1 norm of the reconstructed image, favoring spatially sparse images, is a suitable regularization. Additional problems are overlaid background structures and projection truncation, which can be alleviated by background reduction using a morphological top-hat filter. This paper quantitatively evaluates image reconstruction based on these ideas on software phantom data, in terms of reconstructed absorption coefficients and vessel radii. Results for different algorithms and different input data sets are compared. First results for electrocardiogram-gated reconstruction from clinical catheter-based rotational X-ray coronary angiography are presented. Excellent 3-D image quality can be achieved. PMID:18955171
Reconstructing photorealistic 3D models from image sequence using domain decomposition method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2009-11-01
In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning method. Structured light and photogrammetry are the two main methods to acquire 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom available. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used as a stage for the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using the shape-from-silhouettes algorithm. The silhouettes are decomposed into combinations of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be resolved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each element mesh, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the result is encouraging.
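The shape-from-silhouettes step can be illustrated in its simplest orthographic form: voxel carving against three axis-aligned silhouettes. The paper uses ~20 calibrated perspective views, so this is only a schematic analogue of the rough-model stage:

```python
import numpy as np

def visual_hull(sil0, sil1, sil2):
    """Orthographic voxel carving.  `sil_k` is the boolean silhouette of
    the object seen along volume axis k (i.e. volume.any(axis=k)).
    A voxel survives only if every view sees it inside its silhouette."""
    return sil0[None, :, :] & sil1[:, None, :] & sil2[:, :, None]
```

For a convex axis-aligned shape the hull equals the object exactly; for general shapes it is a conservative superset, which is why the paper refines the rough model with photo-consistency afterwards.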
Breast mass detection using slice conspicuity in 3D reconstructed digital breast volumes
NASA Astrophysics Data System (ADS)
Kim, Seong Tae; Kim, Dae Hoe; Ro, Yong Man
2014-09-01
In digital breast tomosynthesis, the three-dimensional (3D) reconstructed volumes only provide quasi-3D structural information, with limited resolution along the depth direction due to insufficient sampling in that direction and the limited angular range. This limitation can seriously hamper conventional 3D image analysis techniques for detecting masses, because the limited number of projection views causes blurring in the out-of-focus planes. In this paper, we propose a novel mass detection approach using slice conspicuity in 3D reconstructed digital breast volumes to overcome this limitation. First, to overcome the limited resolution along the depth direction, we detect regions of interest (ROIs) on each reconstructed slice and separately utilize the depth-directional information to combine the ROIs effectively. Furthermore, we measure the blurriness of each slice to resolve the performance degradation caused by blur in the out-of-focus planes. Finally, mass features are extracted from the selected in-focus slices and analyzed by a support vector machine classifier to reduce false positives. Comparative experiments have been conducted on a clinical data set. Experimental results demonstrate that the proposed approach outperforms the conventional 3D approach by achieving high sensitivity with a small number of false positives.
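One common way to measure per-slice blurriness, as the approach requires, is the variance-of-Laplacian focus measure; the abstract does not specify the authors' measure, so this is an illustrative stand-in:

```python
import numpy as np

def sharpness(slice_img):
    """Focus measure: variance of the discrete 5-point Laplacian.
    In-focus slices score high; blurred out-of-focus planes score low."""
    s = slice_img.astype(float)
    lap = (-4 * s
           + np.roll(s, 1, axis=0) + np.roll(s, -1, axis=0)
           + np.roll(s, 1, axis=1) + np.roll(s, -1, axis=1))
    return lap.var()
```

Ranking slices by this score is a simple way to pick the in-focus planes before feature extraction.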
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach to error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates makes it possible to evaluate the quality of the 3D reconstruction, as illustrated by the experimental results shown.
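The segmentation-error idea, using fitting parameters of extracted edge points as a quality measure, can be sketched with a least-squares line fit whose residual standard deviation serves as the localization-error estimate (the paper also handles curves; the line case is shown for brevity):

```python
import numpy as np

def fit_line_with_error(xs, ys):
    """Least-squares line fit y = a*x + b.  Returns (a, b, sigma), where
    sigma is the residual standard deviation of the edge points about
    the fitted line, usable as a localization-error estimate."""
    A = np.column_stack([xs, np.ones_like(xs)])
    (a, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
    resid = ys - (a * xs + b)
    dof = max(len(xs) - 2, 1)            # 2 fitted parameters
    sigma = np.sqrt((resid ** 2).sum() / dof)
    return a, b, sigma
```

A large sigma flags poorly localized edge points, and such per-feature uncertainties can then be propagated through calibration, matching, and triangulation to position uncertainties on the final 3D result.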
A preliminary investigation of 3D preconditioned conjugate gradient reconstruction for cone-beam CT
NASA Astrophysics Data System (ADS)
Fu, Lin; De Man, Bruno; Zeng, Kai; Benson, Thomas M.; Yu, Zhou; Cao, Guangzhi; Thibault, Jean-Baptiste
2012-03-01
Model-based iterative reconstruction (MBIR) methods based on maximum a posteriori (MAP) estimation have been recently introduced to multi-slice CT scanners. The model-based approach has shown promising image quality improvement with reduced radiation dose compared to conventional FBP methods, but the associated high computation cost limits its widespread use in clinical environments. Among the various choices of numerical algorithms to optimize the MAP cost function, simultaneous update methods such as the conjugate gradient (CG) method have a relatively high level of parallelism to take full advantage of a new generation of many-core computing hardware. With proper preconditioning techniques, fast convergence speeds of CG algorithms have been demonstrated in 3D emission and 2D transmission reconstruction. However, 3D transmission reconstruction using preconditioned conjugate gradient (PCG) has not been reported. Additional challenges in applying PCG in 3D CT reconstruction include the large size of clinical CT data, shift-variant and incomplete sampling, and complex regularization schemes to meet the diagnostic standard of image quality. In this paper, we present a ramp-filter based PCG algorithm for 3D CT MBIR. Convergence speeds of algorithms with and without using the preconditioner are compared.
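A generic preconditioned conjugate gradient loop, the numerical core the abstract discusses, is sketched below; the ramp-filter preconditioner of the paper is abstracted into a user-supplied `M_inv` operator:

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite system A x = b.  `M_inv` applies the inverse preconditioner
    (in the CT setting, a ramp-filter-like operator); here it is just a
    function v -> M^{-1} v."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                        # residual
    z = M_inv(r)                         # preconditioned residual
    p = z.copy()                         # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p        # conjugate direction update
        rz = rz_new
    return x
```

A good preconditioner clusters the eigenvalues of M^{-1}A, which is exactly why a ramp filter (approximating the inverse of the CT normal operator's frequency response) accelerates convergence; the Jacobi preconditioner below is only a simple stand-in.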
Some Methods of Applied Numerical Analysis to 3D Facial Reconstruction Software
NASA Astrophysics Data System (ADS)
Roşu, Şerban; Ianeş, Emilia; Roşu, Doina
2010-09-01
This paper deals with the collective work performed by medical doctors from the University of Medicine and Pharmacy Timisoara and engineers from the Politechnical Institute Timisoara in the effort to create the first Romanian 3D reconstruction software based on CT or MRI scans and to test the created software in clinical practice.
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.
Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee
2015-12-01
3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, SIFT consumes considerable computational time, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area, so that matching between two corresponding features from different images can be performed efficiently. This approach greatly reduces the matching time. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and the average errors of the reconstructed models. Finally, original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for creating a model of a nasal cavity imaged with a rigid nasal endoscope. PMID:26498516
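The speed-up mechanism, restricting each feature's candidate matches to a spatial zone instead of the whole image, can be sketched as below. This uses a crisp (non-fuzzy) zone as a stand-in for the paper's fuzzy zoning, and synthetic descriptor vectors in place of real SIFT features; all names are illustrative.

```python
import numpy as np

def match_features(desc1, pts1, desc2, pts2, zone_radius=None):
    """Nearest-neighbour descriptor matching. If zone_radius is given,
    each feature of image 1 is only compared with features of image 2
    lying within that spatial radius (confined matching zone)."""
    matches = []
    for i, (d1, p1) in enumerate(zip(desc1, pts1)):
        if zone_radius is None:
            cand = np.arange(len(desc2))           # brute force: all features
        else:
            cand = np.where(np.linalg.norm(pts2 - p1, axis=1) <= zone_radius)[0]
        if cand.size == 0:
            continue
        dists = np.linalg.norm(desc2[cand] - d1, axis=1)
        matches.append((i, int(cand[np.argmin(dists)])))
    return matches

# Synthetic data: image 2 contains the same features, slightly perturbed
# in descriptor space and shifted by at most ~3 px in position.
rng = np.random.default_rng(0)
n = 40
desc1 = rng.standard_normal((n, 8))
pts1 = rng.uniform(0, 100, (n, 2))
desc2 = desc1 + 0.05 * rng.standard_normal((n, 8))
pts2 = pts1 + rng.uniform(-2, 2, (n, 2))
matches = match_features(desc1, pts1, desc2, pts2, zone_radius=5.0)
```

With a zone radius of 5 px, each feature is compared against only the handful of candidates inside its zone rather than all `n` features, which is the source of the reported speed-up.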
Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor
El Natour, Ghina; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice
2015-01-01
In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors: the robustness to environmental conditions and depth detection ability of the radar on the one hand, and the high spatial resolution of a vision sensor on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D from a single acquisition (static sensor), which is not always possible with the state of the art in outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data. PMID:26473874
3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies
Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno
2016-01-01
We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method considers robotized systems that allow single-seed handling in order to rotate a seed in front of a camera. Even though such systems feature high position repeatability, camera pose variations have to be compensated at sub-millimeter object scales. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, and experimentally achieved accuracy, and show as a proof of principle that the proposed method is sufficient for 3D seed phenotyping purposes. PMID:27375628
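The shape-from-silhouette step can be sketched on a toy voxel grid with three orthographic views. The silhouettes are simulated here from a known sphere; in the paper they come from segmented camera images of the rotating seed, and the camera model is perspective rather than orthographic.

```python
import numpy as np

# Voxel grid over [-1, 1]^3.
n = 40
ax = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
truth = X**2 + Y**2 + Z**2 <= 0.5**2       # ground-truth object: a sphere

# Simulated binary silhouettes from three orthographic views, one per axis.
sil_x = truth.any(axis=0)                  # view along x -> (y, z) silhouette
sil_y = truth.any(axis=1)                  # view along y -> (x, z) silhouette
sil_z = truth.any(axis=2)                  # view along z -> (x, y) silhouette

# Volume carving: keep a voxel only if it projects inside every silhouette.
carved = sil_x[None, :, :] & sil_y[:, None, :] & sil_z[:, :, None]
```

The carved volume is the visual hull: it always contains the true object, and it shrinks toward it as more views are added (three views leave the familiar tri-cylinder intersection, slightly larger than the sphere).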
3D reconstruction of tomographic images applied to largely spaced slices.
Traina, A J; Prado, A H; Bueno, J M
1997-12-01
This paper presents a full reconstruction process for magnetic resonance images. The first step is to bring the acquired data from the frequency domain to the image domain using a Fast Fourier Transform algorithm. Tomographic image interpolation is then used to transform a sequence of tomographic slices into an isotropic volume data set, a process also called 3D reconstruction. This work describes an automatic method whose interpolation stage is based on a previous matching stage using Delaunay triangulation. The reconstruction approach uses an extrapolation procedure that permits appropriate treatment of the boundaries of the object under analysis. PMID:9555624
Fast and efficient particle reconstruction on a 3D grid using sparsity
NASA Astrophysics Data System (ADS)
Cornic, P.; Champagnat, F.; Cheminet, A.; Leclaire, B.; Le Besnerais, G.
2015-03-01
We propose an approach for efficient localization and intensity reconstruction of particles on a 3D grid based on sparsity principles. The computational complexity of the method is limited by using the particle volume reconstruction paradigm (Champagnat et al. in Meas Sci Technol 25, 2014) and a reduction in the problem dimension. Tests on synthetic and experimental data show that the proposed method leads to more efficient detections and to reconstructions of higher quality than classical tomoPIV approaches on a large range of seeding densities, up to ppp ≈ 0.12.
Reconstruction for 3D PET Based on Total Variation Constrained Direct Fourier Method
Yu, Haiqing; Chen, Zhi; Zhang, Heye; Loong Wong, Kelvin Kian; Chen, Yunmei; Liu, Huafeng
2015-01-01
This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE) to rebin the 3D data into a stack of ordinary 2D sinograms. The resulting 2D sinograms are then reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce TV-based reconstruction schemes. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validations. The experimental results show that the proposed method produces higher accuracy than the conventional direct Fourier (DF) method (the bias of BOSVS is 70% of that of DF, and the variance of BOSVS is 80% of that of DF). PMID:26398232
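The role of the TV term can be illustrated on a 1D toy signal. This is plain subgradient descent on a TV-regularized least-squares objective, not the BOSVS algorithm of the paper, and the data term is simple denoising rather than sinogram consistency; it only shows why TV regularization suits piecewise-constant images.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.1, n_iter=300):
    """Subgradient descent on  0.5*||x - y||^2 + lam * sum|x[i+1] - x[i]|."""
    x = y.copy()
    for _ in range(n_iter):
        s = np.sign(np.diff(x))
        g = np.zeros_like(x)       # subgradient of the TV term
        g[:-1] -= s                # d|x[i+1]-x[i]| / dx[i]   = -sign(.)
        g[1:] += s                 # d|x[i+1]-x[i]| / dx[i+1] = +sign(.)
        x = x - step * ((x - y) + lam * g)
    return x

# Piecewise-constant ground truth with additive Gaussian noise.
rng = np.random.default_rng(0)
truth = np.r_[np.zeros(50), np.ones(50)]
noisy = truth + 0.3 * rng.standard_normal(100)
den = tv_denoise_1d(noisy)
```

TV penalizes the total jump magnitude, so noise wiggles are flattened while the single genuine step edge survives, the behaviour the paper exploits for locally constant PET activity.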
Automatic 3D Building Reconstruction from a Dense Image Matching Dataset
NASA Astrophysics Data System (ADS)
McClune, Andrew P.; Mills, Jon P.; Miller, Pauline E.; Holland, David A.
2016-06-01
Over the last 20 years the demand for three dimensional (3D) building models has resulted in a vast amount of research being conducted in attempts to automate the extraction and reconstruction of models from airborne sensors. Recent results have shown that current methods tend to favour planar fitting procedures from lidar data, which are able to successfully reconstruct simple roof structures automatically but fail to reconstruct more complex structures or roofs with small artefacts. Current methods have also not fully explored the potential of recent developments in digital photogrammetry. Large format digital aerial cameras can now capture imagery with increased overlap and a higher spatial resolution, increasing the number of pixel correspondences between images. Every pixel in each stereo pair can also now be matched using per-pixel algorithms, which has given rise to the approach known as dense image matching. This paper presents an approach to 3D building reconstruction that tries to overcome some of the limitations of planar fitting procedures. Roof vertices, extracted from true-orthophotos using edge detection, are refined and converted to roof corner points. By determining the connections between extracted corner points, a roof plane can be defined as a closed cycle of points. The presented results demonstrate the potential of this method for the reconstruction of complex 3D building models at CityGML LoD2 specification.
NASA Astrophysics Data System (ADS)
Prause, Guido P. M.; DeJong, Steven C.; McKay, Charles R.; Sonka, Milan
1996-04-01
In this paper, we describe an approach to 3D reconstruction of the coronary tree based on combined use of biplane coronary angiography and intravascular ultrasound (IVUS). Shortly before the start of a constant-speed IVUS pullback, radiopaque dye is injected into the examined coronary tree and the heart is imaged with a calibrated biplane X-ray system. The 3D centerline of the coronary tree is reconstructed from the geometrically corrected biplane angiograms using an automated segmentation method and manual matching of corresponding branching points. The borders of vessel wall and plaque are automatically detected in the acquired pullback images and the IVUS cross sections are mapped perpendicular to the previously reconstructed 3D vessel centerline. In addition, the twist of the IVUS probe due to the curvature of the coronary artery is calculated for a torsion-free catheter and the whole vessel reconstruction is rotationally adjusted using available anatomic landmarks. The accuracy of the biplane reconstruction procedure is validated by means of a left coronary tree phantom. The feasibility of the entire approach is demonstrated in a cadaveric pig heart.
A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision
NASA Astrophysics Data System (ADS)
Saini, Deepika; Kumar, Sanjeev
2015-12-01
In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of its stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton: it is used as a seed for recovering the shape of the 3D object, while the extracted boundary is used for terminating the growing process. A NURBS-skeleton is used to extract the skeleton in both views. The affine invariance of convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object, so a sphere centered at a skeleton point with radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
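The growing step, filling a sphere at every skeleton point with radius given by the distance field, can be illustrated in 2D, where it is the classical medial axis transform. The capsule shape and its skeleton are hard-coded here for clarity; in the paper both are recovered from the stereo images.

```python
import numpy as np

n, r = 100, 10.0
yy, xx = np.mgrid[0:n, 0:n]

# Object: a capsule = all pixels within r of the segment y=50, 30<=x<=70.
cx = np.clip(xx, 30, 70)                    # closest segment x per pixel
shape = np.hypot(xx - cx, yy - 50) <= r

# Skeleton points and their distance-field radii (distance to boundary).
# For a capsule the radius is constant; in general it varies per point.
skel = [(x, 50) for x in range(30, 71)]
radii = [r] * len(skel)

# Growing step: fill a disc at every skeleton point; the union of all
# discs reconstructs the object (2D analogue of sphere filling).
recon = np.zeros_like(shape)
for (sx, sy), rad in zip(skel, radii):
    recon |= np.hypot(xx - sx, yy - sy) <= rad
```

Each disc is tangential to the boundary by construction (its radius is exactly the distance to the boundary), so the union stays inside the object yet covers it completely.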
Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner
NASA Astrophysics Data System (ADS)
Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.
2004-10-01
We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
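The 2D iterative stage, OSEM, can be sketched on a toy linear system. The matrix below is random rather than a PET system matrix, the data are noiseless, and no detector-blur factor is included; it only shows the multiplicative ordered-subsets update.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=50):
    """Ordered-subsets EM for Poisson-model data y ~ Poisson(A x).
    Each sub-iteration applies the MLEM update using one subset of rows."""
    m, n = A.shape
    x = np.ones(n)                              # non-negative start image
    for _ in range(n_iter):
        for s in range(n_subsets):
            idx = np.arange(s, m, n_subsets)    # interleaved subset of rays
            As = A[idx]
            proj = np.maximum(As @ x, 1e-12)    # forward projection
            sens = np.maximum(As.T @ np.ones(idx.size), 1e-12)
            x = x * (As.T @ (y[idx] / proj)) / sens   # multiplicative update
    return x

# Toy problem: 24 rays, 6 pixels, noiseless projections of a known image.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (24, 6))
x_true = rng.uniform(0.5, 2.0, 6)
y = A @ x_true
x_hat = osem(A, y)
```

The multiplicative form keeps the image non-negative automatically, and cycling through subsets gives roughly a `n_subsets`-fold acceleration over plain MLEM, which is why FORE+OSEM is practical for full scanner data.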
Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images
Babu, S; Liao, P; Shin, M C; Tsap, L V
2004-04-28
The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images
NASA Astrophysics Data System (ADS)
Babu, Sabarish; Liao, Pao-Chuan; Shin, Min C.; Tsap, Leonid V.
2006-12-01
The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
Modifications in SIFT-based 3D reconstruction from image sequence
NASA Astrophysics Data System (ADS)
Wei, Zhenzhong; Ding, Boshen; Wang, Wei
2014-11-01
In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), a feature extraction and matching algorithm, has been proposed and improved over the years and has been widely used in image alignment and stitching, image recognition and 3D reconstruction. Because of the robustness and reliability of SIFT's feature extraction and matching, we use it to find correspondences between images. Hence, we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching stage, we modify the process of finding correct correspondences and obtain a satisfying matching result: rejecting "questionable" points before initial matching makes the final matching more reliable. Given SIFT's invariance to image scale, rotation, and changes in the environment, we propose a way to delete the duplicate reconstructed points that occur in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation. The limitation of some approaches, that all reprojected points must be visible at all times, also does not apply in our setting. Small imprecisions can cause big changes as the number of images increases; the paper shows the contrast between the modified algorithm and the original. Moreover, we present an approach to evaluate the reconstruction by comparing reconstructed angles and length ratios with their actual values, using a calibration target in the scene. The proposed evaluation method is easy to carry out and of great practical value; even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the Internet and from our own shots.
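The duplicate-point removal idea can be sketched by quantizing point coordinates to a tolerance grid and keeping the first occurrence in each cell. This is an illustrative stand-in for the paper's criterion (grid quantization can split a pair of points straddling a cell boundary), and all names are hypothetical.

```python
import numpy as np

def dedupe_points(points, tol=1e-3):
    """Remove near-duplicate 3D points (keeping the first occurrence)
    by quantizing coordinates to a grid of cell size tol."""
    keys = np.round(points / tol).astype(np.int64)      # grid cell per point
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]                        # preserve input order

# Each point appears twice: once exactly, once with sub-tolerance jitter,
# as happens when the same scene point is triangulated from two image pairs.
base = np.arange(15, dtype=float).reshape(5, 3)
cloud = np.vstack([base, base + 1e-5])
unique_pts = dedupe_points(cloud)
```

Running the sequential reconstruction with such a pass after each triangulation step prevents the same scene point from being counted several times and accumulating error.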
3D seismic data reconstruction based on complex-valued curvelet transform in frequency domain
NASA Astrophysics Data System (ADS)
Zhang, Hua; Chen, Xiaohong; Li, Hongxing
2015-02-01
Traditional seismic data sampling must follow the Nyquist sampling theorem. However, field data acquisition may not meet the sampling criteria due to missing traces or limits on exploration cost, causing a prestack data reconstruction problem. Recently, researchers have proposed many useful methods to regularize seismic data. In this paper, a 3D seismic data reconstruction method based on the Projections Onto Convex Sets (POCS) algorithm and a complex-valued curvelet transform (CCT) is introduced in the frequency domain. In order to improve reconstruction efficiency and reduce computation time, the seismic data are transformed from the t-x-y domain to the f-x-y domain and the reconstruction is processed for every frequency slice. The threshold parameter selected at each iteration is important for reconstruction efficiency; therefore, an exponential square root decreased (ESRD) threshold is proposed. The experimental results show that the ESRD threshold can greatly reduce the number of iterations and improve reconstruction efficiency compared to other thresholds for the same reconstruction result. We also analyze the noise resistance of the CCT-based POCS reconstruction method. Example studies on synthetic and real marine seismic data show that the proposed method is efficient and widely applicable.
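Each POCS iteration alternates transform-domain thresholding with reinsertion of the observed traces. The sketch below works on a 1D signal with a plain FFT standing in for the complex-valued curvelet transform and a simple exponential threshold decay standing in for the ESRD schedule; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
t_ax = np.arange(n)
# A signal that is sparse in the Fourier domain (two sinusoids).
signal = np.sin(2 * np.pi * 5 * t_ax / n) + 0.5 * np.sin(2 * np.pi * 12 * t_ax / n)
mask = rng.random(n) < 0.6                 # only 60% of "traces" observed
observed = np.where(mask, signal, 0.0)

x = observed.copy()
t0 = np.abs(np.fft.fft(observed)).max()    # start threshold at largest coeff
n_iter = 200
for k in range(n_iter):
    t = t0 * (1e-3) ** (k / (n_iter - 1))  # exponentially decreasing threshold
    F = np.fft.fft(x)
    F[np.abs(F) < t] = 0.0                 # keep only significant coefficients
    x = np.fft.ifft(F).real
    x[mask] = signal[mask]                 # project onto the data-consistency set
```

As the threshold decays, progressively weaker coherent components are admitted while the known samples are re-imposed every pass, so the missing traces are filled in by the sparse transform-domain model.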
3D-printed supercapacitor-powered electrochemiluminescent protein immunoarray.
Kadimisetty, Karteek; Mosa, Islam M; Malla, Spundana; Satterwhite-Warden, Jennifer E; Kuhns, Tyler M; Faria, Ronaldo C; Lee, Norman H; Rusling, James F
2016-03-15
Herein we report a low cost, sensitive, supercapacitor-powered electrochemiluminescent (ECL) protein immunoarray fabricated by an inexpensive 3-dimensional (3D) printer. The immunosensor detects three cancer biomarker proteins in serum within 35 min. The 3D-printed device employs hand screen printed carbon sensors with gravity flow for sample/reagent delivery and washing. Prostate cancer biomarker proteins, prostate specific antigen (PSA), prostate specific membrane antigen (PSMA) and platelet factor-4 (PF-4) in serum were captured on the antibody-coated carbon sensors followed by delivery of detection-antibody-coated Ru(bpy)3(2+) (RuBPY)-doped silica nanoparticles in a sandwich immunoassay. ECL light was initiated from RuBPY in the silica nanoparticles by electrochemical oxidation with tripropylamine (TPrA) co-reactant using supercapacitor power and ECL was captured with a CCD camera. The supercapacitor was rapidly photo-recharged between assays using an inexpensive solar cell. Detection limits were 300-500 fg mL(-1) for the 3 proteins in undiluted calf serum. Assays of 6 prostate cancer patient serum samples gave good correlation with conventional single protein ELISAs. This technology could provide sensitive onsite cancer diagnostic tests in resource-limited settings with the need for only moderate-level training. PMID:26406460
Unbiased contaminant removal for 3D galaxy power spectrum measurements
NASA Astrophysics Data System (ADS)
Kalus, B.; Percival, W. J.; Bacon, D. J.; Samushia, L.
2016-08-01
We assess and develop techniques to remove contaminants when calculating the 3D galaxy power spectrum. We separate the process into three stages: (i) removing the contaminant signal, (ii) estimating the uncontaminated cosmological power spectrum, (iii) debiasing the resulting estimates. For (i), we show that removing the best-fit contaminant (mode subtraction) and setting the contaminated components of the covariance to be infinite (mode deprojection) are mathematically equivalent. For (ii), performing a Quadratic Maximum Likelihood (QML) estimate after mode deprojection gives an optimal unbiased solution, although it requires the manipulation of large N_mode^2 matrices (N_mode being the total number of modes), which is unfeasible for recent 3D galaxy surveys. Measuring a binned average of the modes for (ii), as proposed by Feldman, Kaiser & Peacock (1994, FKP), is faster and simpler, but is sub-optimal and gives rise to a biased solution. We present a method to debias the resulting FKP measurements that does not require any large matrix calculations. We argue that the sub-optimality of the FKP estimator compared with the QML estimator, caused by contaminants, is less severe than that commonly ignored due to the survey window.
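Stage (i), mode subtraction, can be sketched in a few lines: the best-fit (least-squares) amplitude of a known contaminant template is removed from the data, leaving a field exactly orthogonal to the template, which is why it is equivalent to assigning that mode infinite variance in mode deprojection. The 1D pixelized field below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 200
signal = rng.standard_normal(n_pix)      # stand-in for the uncontaminated field
template = rng.standard_normal(n_pix)    # known contaminant mode (e.g. stellar density)
data = signal + 3.7 * template           # contaminated observation

# Mode subtraction: remove the best-fit amplitude of the template.
eps_hat = (template @ data) / (template @ template)
cleaned = data - eps_hat * template

# 'cleaned' is the projection of the data onto the subspace orthogonal to
# the template: no residual template signal survives, whatever its true
# amplitude was.
```

Note that `eps_hat` absorbs a little of the cosmological signal that happens to correlate with the template, which is exactly the loss of information that the subsequent debiasing stage (iii) must account for.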
Orbital Wall Reconstruction with Two-Piece Puzzle 3D Printed Implants: Technical Note.
Mommaerts, Maurice Y; Büttner, Michael; Vercruysse, Herman; Wauters, Lauri; Beerens, Maikel
2016-03-01
The purpose of this article is to describe a technique for secondary reconstruction of traumatic orbital wall defects using titanium implants that act as three-dimensional (3D) puzzle pieces. We present three cases of large defect reconstruction using implants produced by Xilloc Medical B.V. (Maastricht, the Netherlands) with a 3D printer manufactured by LayerWise (3D Systems; Heverlee, Belgium), and designed using the biomedical engineering software programs ProPlan and 3-Matic (Materialise, Heverlee, Belgium). The smaller size of the implants allowed sequential implantation for the reconstruction of extensive two-wall defects via a limited transconjunctival incision. The precise fit of the implants with regard to the surrounding ledges and each other was confirmed by intraoperative 3D imaging (Mobile C-arm Systems B.V. Pulsera, Philips Medical Systems, Eindhoven, the Netherlands). The patients showed near-complete restoration of orbital volume and ocular motility. However, challenges remain, including traumatic fat atrophy and fibrosis. PMID:26889349
Real-Time Large Scale 3D Reconstruction by Fusing Kinect and IMU Data
NASA Astrophysics Data System (ADS)
Huai, J.; Zhang, Y.; Yilmaz, A.
2015-08-01
Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes, such as robot navigation and augmented reality. However, generating dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) the coarse-to-fine iterative closest point (ICP) algorithm, SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-01-01
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758
Experimentation of structured light and stereo vision for underwater 3D reconstruction
NASA Astrophysics Data System (ADS)
Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A. V.
Current research on underwater 3D imaging methods mainly addresses long range applications like seafloor mapping or surveys of archaeological sites and shipwrecks. Recently, there is an increasing need for more accessible and precise close-range 3D acquisition technologies in application fields such as monitoring the growth of coral reefs or reconstructing underwater archaeological pieces that in most cases cannot be recovered from the seabed. This paper presents the first results of a research project that aims to investigate the possibility of using active optical techniques for whole-field 3D reconstruction in an underwater environment. In this work we have tested an optical technique, frequently used for in-air acquisition, based on the projection of structured lighting patterns acquired by a stereo vision system. We describe the experimental setup used for the underwater tests, which were conducted in a water tank with different turbidity conditions. The tests have shown that the quality of 3D reconstruction is acceptable even at high turbidity values, despite the heavy presence of scattering and absorption effects.
Flexible 3D reconstruction method based on phase-matching in multi-sensor system.
Wu, Qingyang; Zhang, Baichun; Huang, Jinhui; Wu, Zejun; Zeng, Zeng
2016-04-01
Considering the measuring range limitation of a single-sensor system, multi-sensor systems have become essential for obtaining complete image information of an object in the field of 3D image reconstruction. However, in traditional multi-sensor systems the sensors work independently, so each sensor system has to be calibrated separately, and the calibration between all the single-sensor systems is complicated and time-consuming. In this paper, we present a flexible 3D reconstruction method based on phase-matching in a multi-sensor system. While calibrating each sensor, it simultaneously registers the data of the multi-sensor system in a unified coordinate system. After all sensors are calibrated, the whole 3D image data directly exist in the unified coordinate system, and there is no need to calibrate the positions between sensors any more. Experimental results prove that the method is simple in operation, accurate in measurement, and fast in 3D image reconstruction. PMID:27137020
NASA Astrophysics Data System (ADS)
Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.
2016-02-01
Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.
Real-Time 3D Reconstruction from Images Taken from an UAV
NASA Astrophysics Data System (ADS)
Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.
2015-08-01
We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, giving the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results: highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For these characteristics, the designed method is suitable for video-surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
Accuracy assessment of 3D bone reconstructions using CT: an in vitro comparison.
Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A
2015-08-01
Computed tomography provides high-contrast imaging of the joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, for computer-assisted surgeries, and for computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study determined the ground truth measure for the underlying geometry. We evaluated the use of a commercially available reconstruction technique with clinical CT scanning protocols, using the elbow joint as an example of a surface with complex geometry. To assess the accuracies of the reconstructed models (8 fresh frozen cadaveric specimens) against the ground truth bony digitization, as defined by this study, proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast had a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used, commercially available reconstruction algorithms can create models which accurately represent the true geometry. PMID:26037323
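Proximity mapping of this kind reduces, in its simplest form, to a nearest-neighbour residual between the reconstructed points and the digitized ground truth. A minimal brute-force sketch (the point coordinates are made up; real pipelines compare to a surface and use spatial indexing):

```python
import math

def proximity_error(model_pts, truth_pts):
    """Mean residual error of a reconstruction: for every model point,
    the distance to its nearest ground-truth digitized point
    (brute force; real pipelines use spatial indexing)."""
    def nearest(p):
        return min(math.dist(p, q) for q in truth_pts)
    return sum(nearest(p) for p in model_pts) / len(model_pts)

truth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
model = [(0.0, 0.0, 0.3), (1.0, 0.0, 0.0)]
print(round(proximity_error(model, truth), 2))  # 0.15
```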
Interactive Retro-Deformation of Terrain for Reconstructing 3D Fault Displacements.
Westerteiger, R; Compton, T; Bernadin, T; Cowgill, E; Gwinner, K; Hamann, B; Gerndt, A; Hagen, H
2012-12-01
Planetary topography is the result of complex interactions between geological processes, of which faulting is a prominent component. Surface-rupturing earthquakes cut and move landforms which develop across active faults, producing characteristic surface displacements across the fault. Geometric models of faults and their associated surface displacements are commonly applied to reconstruct these offsets to enable interpretation of the observed topography. However, current 2D techniques are limited in their capability to convey both the three-dimensional kinematics of faulting and the incremental sequence of events required by a given reconstruction. Here we present a real-time system for interactive retro-deformation of faulted topography to enable reconstruction of fault displacement within a high-resolution (sub-1 m/pixel) 3D terrain visualization. We employ geometry shaders on the GPU to intersect the surface mesh with fault segments interactively specified by the user and transform the resulting surface blocks in real time according to a kinematic model of fault motion. Our method facilitates a human-in-the-loop approach to the reconstruction of fault displacements by providing instant visual feedback while exploring the parameter space. Thus, scientists can evaluate the validity of traditional point-to-point reconstructions by visually examining a smooth interpolation of the displacement in 3D. We show the efficacy of our approach by using it to reconstruct segments of the San Andreas fault, California, as well as a graben structure in the Noctis Labyrinthus region on Mars. PMID:26357128
Moriya, Toshio; Acar, Erman; Cheng, R Holland; Ruotsalainen, Ulla
2015-09-01
In single-particle reconstruction, the initial 3D structure often suffers from the limited angular sampling artifact. Selecting 2D class averages of particle images generally improves the accuracy and efficiency of reference-free 3D angle estimation, but causes insufficient angular sampling to fill in the information of the target object in 3D frequency space. Similarly, the initial 3D structure from random-conical tilt reconstruction has the well-known "missing cone" artifact. Here, we attempted to solve the limited angular sampling problem by sequentially applying a maximum a posteriori estimate with an expectation maximization algorithm (sMAP-EM). Using both simulated and experimental cryo-electron microscope images, the sMAP-EM was compared to the direct Fourier method on the basis of reconstruction error and resolution. To establish selection criteria for the final regularization weight of the sMAP-EM, the effects of noise level and sampling sparseness on the reconstructions were examined with evenly distributed sampling simulations. The frequency information filled in the missing cone of the conical tilt sampling simulations was assessed by developing new quantitative measurements. All the results of visual and numerical evaluations showed that the sMAP-EM performed better than the direct Fourier method, regardless of the sampling method, noise level, and sampling sparseness. Furthermore, the frequency domain analysis demonstrated that the sMAP-EM can fill in meaningful information in the unmeasured angular space without detailed a priori knowledge of the objects. The current research demonstrates that the sMAP-EM has high potential to facilitate the determination of 3D protein structures at near atomic resolution. PMID:26193484
Automatic system for 3D reconstruction of the chick eye based on digital photographs.
Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L
2012-01-01
The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate it, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to that of a real chick eye and could be used for morphological studies and FEA. PMID:21181572
Dense point-cloud creation using superresolution for a monocular 3D reconstruction system
NASA Astrophysics Data System (ADS)
Diskin, Yakov; Asari, Vijayan K.
2012-05-01
We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm presented focuses on the 3D reconstruction of a scene using only a single moving camera. In this way, the system can be used to construct a point cloud model of its unknown surroundings. The original reconstruction process, which produces a point cloud, was computed based on feature matching and depth triangulation analysis. Although dense, this original model was hindered by its low disparity resolution. As feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of the preprocessing steps of nonlinear super resolution, the accuracy of the point cloud, which relies on precise disparity measurement, has significantly increased. Using a pixel-by-pixel approach, the super resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high resolution input frames. Thus, a feature point travels a more precisely resolved discrete disparity. Also, the quantity of points within the 3D point cloud model is significantly increased, since the number of features is directly proportional to the resolution and high frequencies of the input image. The contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud for autonomous navigation and mapping tasks within unknown environments.
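The link between disparity resolution and depth precision can be made concrete: with depth Z = f·B/d, the depth jump between two adjacent resolvable disparities shrinks as the disparity step gets finer, which is exactly what super-resolved input frames buy. The numbers below are hypothetical.

```python
def depth(f_px, baseline_m, disparity_px):
    """Rectified two-view depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_quantization_error(f_px, baseline_m, d_px, step_px):
    """Depth change caused by the smallest resolvable disparity step:
    the finer the step, the finer the depth quantization."""
    return depth(f_px, baseline_m, d_px) - depth(f_px, baseline_m, d_px + step_px)

# Hypothetical numbers: halving the disparity step (2x super-resolved
# frames) roughly halves the depth quantization error.
coarse = depth_quantization_error(1000.0, 0.5, 20.0, 1.0)
fine = depth_quantization_error(1000.0, 0.5, 20.0, 0.5)
print(fine < coarse)  # True
```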
Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae
2012-01-01
Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
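The height-histogram step can be sketched in miniature: bin the points by height, take the dominant bin as the ground level, and label everything near it as ground. This is a deliberately simplified, assumption-laden sketch (synthetic points, no Gibbs-Markov refinement, hypothetical bin size and tolerance):

```python
from collections import Counter

def estimate_ground_range(points, bin_size=0.2):
    """Take the most populated height bin as the ground level,
    assuming the ground plane contributes the most points."""
    bins = Counter(int(p[2] // bin_size) for p in points)
    mode_bin, _ = bins.most_common(1)[0]
    lo = mode_bin * bin_size
    return lo, lo + bin_size

def split_ground(points, bin_size=0.2, tol=0.3):
    """Label points near the dominant height bin as ground."""
    lo, hi = estimate_ground_range(points, bin_size)
    ground = [p for p in points if lo - tol <= p[2] <= hi + tol]
    non_ground = [p for p in points if not (lo - tol <= p[2] <= hi + tol)]
    return ground, non_ground

# Synthetic scene: a flat ground plane plus a few elevated object points
scene = [(i, i, 0.05) for i in range(50)] + [(i, i, 5.0) for i in range(5)]
ground, objects = split_ground(scene)
print(len(ground), len(objects))  # 50 5
```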
Detection and 3D reconstruction of traffic signs from multiple view color images
NASA Astrophysics Data System (ADS)
Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno
2013-03-01
3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. In order to reflect reality, the reconstruction process should meet both accuracy and precision requirements. In order to reach such a valid reconstruction from calibrated multi-view images, accurate and precise extraction of signs in every individual view is a must. This paper first presents an automatic pipeline for identifying and extracting the silhouette of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimum 3D silhouette for the detected signs. The first step, called detection, applies a color-based segmentation to generate ROIs (Regions of Interest) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to edge points. A ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove the perspective distortion and then matched with a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouette in the image plane is represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach taking into account epipolar geometry and also the similarity of the categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm integrating a priori knowledge about the 3D shape of road signs as constraints. The algorithm is assessed on real and synthetic images and reached an average accuracy of 3.5 cm for
NASA Astrophysics Data System (ADS)
Zapiór, Maciej; Martínez-Gómez, David
2016-02-01
Based on the data collected by the Vacuum Tower Telescope located in the Teide Observatory in the Canary Islands, we analyzed the three-dimensional (3D) motion of so-called knots in a solar prominence of 2014 June 9. Trajectories of seven knots were reconstructed, giving information about the 3D geometry of the magnetic field. Helical motion was detected. From the equipartition principle, we estimated the lower limit of the magnetic field in the prominence to be ≈1-3 G and, from Ampère's law, the lower limit of the electric current to be ≈1.2 × 10⁹ A.
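The equipartition lower limit follows from equating the magnetic and bulk kinetic energy densities, B²/(2μ₀) = ρv²/2, so B = v·√(μ₀ρ). The sketch below uses hypothetical prominence values, not the paper's measured speeds and densities, and lands in the few-gauss range the authors report.

```python
import math

MU0 = 4.0 * math.pi * 1e-7  # vacuum permeability (SI units)

def equipartition_field_gauss(speed_m_s, density_kg_m3):
    """Field strength at which the magnetic energy density equals the
    bulk kinetic energy density: B^2 / (2 mu0) = rho * v^2 / 2."""
    b_tesla = speed_m_s * math.sqrt(MU0 * density_kg_m3)
    return b_tesla * 1e4  # 1 T = 10^4 G

# Hypothetical values (not the paper's measurements): a 20 km/s knot
# moving through plasma of density 2e-10 kg/m^3.
print(round(equipartition_field_gauss(2.0e4, 2.0e-10), 1))  # ~3.2 G
```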
A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis.
Xu, Yiwen; Pickering, J Geoffrey; Nong, Zengxuan; Gibson, Eli; Arpino, John-Michael; Yin, Hao; Ward, Aaron D
2015-01-01
Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate the microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and the conventional high-resolution intensity-based registration method. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error, to capture propagation of error through the stack of sections). Accumulated error measures were lower (p < 0.01) for the nucleus landmark technique and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic "banana-into-cylinder" effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue reconstructions for
NASA Astrophysics Data System (ADS)
Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.
1996-02-01
We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support, to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration, and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position, location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.
High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures
Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando
2011-01-01
Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017
GlaRe, a GIS tool to reconstruct the 3D surface of palaeoglaciers
NASA Astrophysics Data System (ADS)
Pellitero, Ramón; Rea, Brice R.; Spagnolo, Matteo; Bakke, Jostein; Ivy-Ochs, Susan; Frew, Craig R.; Hughes, Philip; Ribolini, Adriano; Lukas, Sven; Renssen, Hans
2016-09-01
Glacier reconstructions are widely used in palaeoclimatic studies, and this paper presents a new semi-automated method for generating them: GlaRe, a toolbox coded in Python and operating in ArcGIS. This toolbox provides tools to generate the ice thickness from the bed topography along a palaeoglacier flowline by applying the standard flow law for ice, and generates the 3D surface of the palaeoglacier using multiple interpolation methods. The toolbox performance has been evaluated using two extant glaciers, an icefield and a cirque/valley glacier, from which the subglacial topography is known, using the basic reconstruction routine in GlaRe. Results in terms of ice surface, ice extent and equilibrium line altitude show excellent agreement, which confirms the robustness of this procedure for reconstructing palaeoglaciers from glacial landforms such as frontal moraines.
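The physics behind such flowline tools can be illustrated with the perfect-plasticity closed form for a flat bed, H(x) = √(2τx/ρg). This is only a sketch of the underlying relation: GlaRe itself integrates stepwise over the real bed topography, and the shear stress value here is the commonly assumed 100 kPa, not a toolbox default.

```python
import math

RHO_ICE = 917.0  # ice density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def flat_bed_thickness(x_m, tau_pa=1.0e5):
    """Perfect-plasticity ice thickness a distance x up-flow of the
    terminus on a flat bed: H(x) = sqrt(2 * tau * x / (rho * g)),
    with basal shear stress tau (canonically ~100 kPa)."""
    return math.sqrt(2.0 * tau_pa * x_m / (RHO_ICE * G))

# 10 km up-flow of the terminus the profile reaches a few hundred metres.
print(round(flat_bed_thickness(10_000.0)))
```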
Automated Reconstruction of Walls from Airborne LIDAR Data for Complete 3d Building Modelling
NASA Astrophysics Data System (ADS)
He, Y.; Zhang, C.; Awrangjeb, M.; Fraser, C. S.
2012-07-01
Automated 3D building model generation continues to attract research interests in photogrammetry and computer vision. Airborne Light Detection and Ranging (LIDAR) data with increasing point density and accuracy has been recognized as a valuable source for automated 3D building reconstruction. While considerable achievements have been made in roof extraction, limited research has been carried out in modelling and reconstruction of walls, which constitute important components of a full building model. Low point density and irregular point distribution of LIDAR observations on vertical walls render this task complex. This paper develops a novel approach for wall reconstruction from airborne LIDAR data. The developed method commences with point cloud segmentation using a region growing approach. Seed points for planar segments are selected through principal component analysis, and points in the neighbourhood are collected and examined to form planar segments. Afterwards, segment-based classification is performed to identify roofs, walls and planar ground surfaces. For walls with sparse LIDAR observations, a search is conducted in the neighbourhood of each individual roof segment to collect wall points, and the walls are then reconstructed using geometrical and topological constraints. Finally, walls which were not illuminated by the LIDAR sensor are determined via both reconstructed roof data and neighbouring walls. This leads to the generation of topologically consistent and geometrically accurate and complete 3D building models. Experiments have been conducted in two test sites in the Netherlands and Australia to evaluate the performance of the proposed method. Results show that planar segments can be reliably extracted in the two reported test sites, which have different point density, and the building walls can be correctly reconstructed if the walls are illuminated by the LIDAR sensor.
NASA Astrophysics Data System (ADS)
Gómez-Gutiérrez, Álvaro; Schnabel, Susanne; Conoscenti, Christian; Caraballo-Arias, Nathalie A.; Ferro, Vito; di Stefano, Constanza; Juan de Sanjosé, José; Berenguer-Sempere, Fernando; de Matías, Javier
2014-05-01
Recent developments in tri-dimensional photo-reconstruction techniques (3D-PR), such as the combined use of Structure from Motion (SfM) and MultiView Stereo (MVS) techniques, have made it possible to obtain high-resolution 3D point clouds. To achieve final point clouds with these techniques, only oblique images from uncalibrated, non-metric consumer cameras are needed. Here, these techniques are used to measure, monitor and quantify geomorphological features and processes. Three different applications across a range of scales and landforms are presented. Firstly, five small gully headcuts located in a small catchment in SW Spain were monitored with the aim of estimating headcut retreat rates. During this field work, 3D models obtained by means of a Terrestrial Laser Scanner (TLS) were captured and used as benchmarks to analyze the accuracy of the 3D-PR method. Results of this analysis showed centimeter-level accuracies, with average distances between the 3D-PR model and the TLS model ranging from 0.009 to 0.025 m. Estimated soil loss ranged from -0.246 m3 to 0.114 m3 for a wet period (289 mm) of 54 days in 2013. Secondly, a calanchi-type badland in Sicily (Italy) was photo-reconstructed and the quality of the 3D-PR model was analyzed using a Digital Elevation Model produced by classic digital photogrammetry with photos captured by an Unmanned Aerial Vehicle (UAV). In this case, sub-meter calculated accuracies (0.30 m) showed that it is possible to describe badland morphology using 3D-PR models, but it is not feasible to use these models to quantify annual rates of soil erosion in badlands (10 mm eroded per year). Finally, a high-resolution model of the Veleta rock glacier (in SE Spain) was elaborated with 3D-PR techniques and compared with a 3D model obtained by means of a TLS. Results indicated that the 3D-PR method can be applied to the micro-scale study of glacier morphologies and processes, with average distances to the TLS point cloud of 0.21 m.
Image-based reconstruction of 3D myocardial infarct geometry for patient specific applications
NASA Astrophysics Data System (ADS)
Ukwatta, Eranga; Rajchl, Martin; White, James; Pashakhanloo, Farhad; Herzka, Daniel A.; McVeigh, Elliot; Lardo, Albert C.; Trayanova, Natalia; Vadakkumpadan, Fijoy
2015-03-01
Accurate reconstruction of the three-dimensional (3D) geometry of a myocardial infarct from two-dimensional (2D) multi-slice image sequences has important applications in the clinical evaluation and treatment of patients with ischemic cardiomyopathy. However, this reconstruction is challenging because the resolution of common clinical scans used to acquire infarct structure, such as short-axis, late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) images, is low, especially in the out-of-plane direction. In this study, we propose a novel technique to reconstruct the 3D infarct geometry from low resolution clinical images. Our methodology is based on a function called logarithm of odds (LogOdds), which allows the broader class of linear combinations in the LogOdds vector space as opposed to being limited to only a convex combination in the binary label space. To assess the efficacy of the method, we used high-resolution LGE-CMR images of 36 human hearts in vivo, and 3 canine hearts ex vivo. The infarct was manually segmented in each slice of the acquired images, and the manually segmented data were downsampled to clinical resolution. The developed method was then applied to the downsampled image slices, and the resulting reconstructions were compared with the manually segmented data. Several existing reconstruction techniques were also implemented, and compared with the proposed method. The results show that the LogOdds method significantly outperforms all the other tested methods in terms of region overlap.
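The LogOdds idea can be illustrated in one dimension (a hedged sketch, not the authors' implementation): slice probabilities are mapped to log-odds, where arbitrary linear combinations are well defined, combined, and mapped back through the sigmoid.

```python
import math

def logodds(p, eps=1e-6):
    """Map a probability to log-odds, clamped away from {0, 1}."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def interp_slice(p_above, p_below, w=0.5):
    """Interpolate a missing slice's probability in LogOdds space,
    where linear combinations stay well defined (unlike the binary
    label space, which only admits convex combinations)."""
    return sigmoid(w * logodds(p_above) + (1.0 - w) * logodds(p_below))

# Agreement between slices is preserved; disagreement interpolates smoothly.
print(round(interp_slice(0.9, 0.9), 6))  # 0.9
print(round(interp_slice(0.9, 0.1), 6))  # 0.5
```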
3D reconstruction of carbon nanotube networks from neutron scattering experiments
NASA Astrophysics Data System (ADS)
Mahdavi, Mostafa; Baniassadi, Majid; Baghani, Mostafa; Dadmun, Mark; Tehrani, Mehran
2015-09-01
Structure reconstruction from statistical descriptors, such as scattering data obtained using x-rays or neutrons, is essential in understanding various properties of nanocomposites. Scattering based reconstruction can provide a realistic model, over various length scales, that can be used for numerical simulations. In this study, 3D reconstruction of a highly loaded carbon nanotube (CNT)-conducting polymer system based on small and ultra-small angle neutron scattering (SANS and USANS, respectively) data was performed. These light-weight and flexible materials have recently shown great promise for high-performance thermoelectric energy conversion, and their further improvement requires a thorough understanding of their structure-property relationships. The first step in achieving such understanding is to generate models that contain the hierarchy of CNT networks over nano and micron scales. The studied system is a single walled carbon nanotube (SWCNT)/poly (3,4-ethylenedioxythiophene):poly (styrene sulfonate) (PEDOT:PSS). SANS and USANS patterns of the different samples containing 10, 30, and 50 wt% SWCNTs were measured. These curves were then utilized to calculate statistical two-point correlation functions of the nanostructure. These functions along with the geometrical information extracted from SANS data and scanning electron microscopy images were used to reconstruct a representative volume element (RVE) nanostructure. Generated RVEs can be used for simulations of various mechanical and physical properties. This work, therefore, introduces a framework for the reconstruction of 3D RVEs of high volume fraction nanocomposites containing high aspect ratio fillers from scattering experiments.
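A two-point correlation function has a simple discrete analogue that conveys the idea: for a binary microstructure, S2(r) is the probability that two points a distance r apart both lie in the filler phase, and S2(0) recovers the volume fraction. A 1D toy sketch (the array is made up, not derived from the paper's data):

```python
def two_point_correlation(phase, r):
    """S2(r) for a periodic 1D binary microstructure: the probability
    that two points a distance r apart both lie in the filler phase."""
    n = len(phase)
    hits = sum(1 for i in range(n) if phase[i] and phase[(i + r) % n])
    return hits / n

micro = [1, 1, 0, 0, 1, 0, 0, 0]  # toy 1D "nanocomposite" cross-section
print(two_point_correlation(micro, 0))  # 0.375 (the volume fraction)
```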
3D Alternating Direction TV-Based Cone-Beam CT Reconstruction with Efficient GPU Implementation
Cai, Ailong; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Guan, Min; Li, Jianxin
2014-01-01
Iterative image reconstruction (IIR) with sparsity-exploiting methods, such as total variation (TV) minimization, claims potentially large reductions in sampling requirements. However, the computational complexity becomes a heavy burden, especially in 3D reconstruction situations. In order to improve the performance of iterative reconstruction, an efficient IIR algorithm for cone-beam computed tomography (CBCT) with GPU implementation has been proposed in this paper. In the first place, an algorithm based on alternating direction total variation using local linearization and a proximity technique is proposed for CBCT reconstruction. The applied proximal technique avoids the costly pseudoinverse computation of a large matrix, which makes the proposed algorithm applicable and efficient for CBCT imaging. The iteration for this algorithm is simple yet convergent. The simulation and real CT data reconstruction results indicate that the proposed algorithm is both fast and accurate. The GPU implementation shows an excellent acceleration ratio of more than 100 compared with CPU computation without losing numerical accuracy. The runtime for the new 3D algorithm is about 6.8 seconds per loop with the image size of 256 × 256 × 256 and 36 projections of the size of 512 × 512. PMID:25045400
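The TV-minimization objective itself can be illustrated in one dimension. The sketch below is plain gradient descent on a smoothed TV surrogate, chosen for brevity; it is not the authors' alternating-direction proximal scheme, and all parameter values are arbitrary. It shows the characteristic behaviour: noise is suppressed while a genuine edge survives.

```python
import math

def tv_denoise_1d(y, lam=0.5, step=0.05, iters=2000, eps=1e-2):
    """Gradient descent on the smoothed TV objective
    0.5 * ||x - y||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps)."""
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]  # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)  # smoothed-TV gradient
            g[i] -= t
            g[i + 1] += t
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

noisy = [0.0, 0.1, -0.1, 1.0, 0.9, 1.1]  # noisy step edge
smooth = tv_denoise_1d(noisy)
# Total variation of the output is reduced while the step edge survives.
```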
Characterizing heterogeneity among virus particles by stochastic 3D signal reconstruction
NASA Astrophysics Data System (ADS)
Xu, Nan; Gong, Yunye; Wang, Qiu; Zheng, Yili; Doerschuk, Peter C.
2015-09-01
In single-particle cryo electron microscopy, many electron microscope images, each of a single instance of a biological particle such as a virus or a ribosome, are measured, and the 3-D electron scattering intensity of the particle is reconstructed by computation. Because each instance of the particle is imaged separately, it should be possible to characterize the heterogeneity of the different instances of the particle as well as compute a nominal reconstruction of the particle. In this paper, such an algorithm is described and demonstrated on the bacteriophage Hong Kong 97. The algorithm is a statistical maximum likelihood estimator computed by an expectation maximization algorithm implemented in Matlab software.
Li, Fan; Chenoune, Yasmina; Ouenniche, Meriem; Blanc, Raphaël; Petit, Eric
2014-01-01
Diagnosis and computer-guided therapy of cerebral Arterio-Venous Malformations (AVM) require an accurate understanding of the cerebral vascular network from both a structural and a biomechanical point of view. We propose to obtain such information by analyzing three-Dimensional Rotational Angiography (3DRA) images. In this paper, we describe a two-step process allowing 1) the automatic 3D segmentation of cerebral vessels from 3DRA images using a region-growing based algorithm and 2) the reconstruction of the segmented vessels using the 3D constrained Delaunay triangulation method. The proposed algorithm was successfully applied to reconstruct cerebral blood vessels from ten datasets of 3DRA images. This software allows the neuroradiologist to separately analyze cerebral vessels for pre-operative intervention planning and therapeutic decision making. PMID:25571245
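Region growing of the kind used for vessel segmentation is, at its core, a flood fill with an intensity acceptance test. A minimal 2D sketch follows; it is a generic illustration on a made-up intensity grid, not the paper's 3DRA algorithm.

```python
from collections import deque

def region_grow(img, seed, tol=10):
    """Flood-fill style region growing: accept 4-neighbours whose
    intensity is within tol of the seed pixel's intensity."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                if abs(img[ny][nx] - base) <= tol:
                    region.add((ny, nx))
                    frontier.append((ny, nx))
    return region

# Toy intensity image: a bright "vessel" snaking through dark tissue
vessel = [
    [200, 200,  20],
    [ 20, 200,  20],
    [ 20, 200, 200],
]
print(len(region_grow(vessel, (0, 0))))  # 5
```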
Hollow Cone Electron Imaging for Single Particle 3D Reconstruction of Proteins
NASA Astrophysics Data System (ADS)
Tsai, Chun-Ying; Chang, Yuan-Chih; Lobato, Ivan; van Dyck, Dirk; Chen, Fu-Rong
2016-06-01
The main bottlenecks for high-resolution biological imaging in electron microscopy are radiation sensitivity and low contrast. The phase contrast at low spatial frequencies can be enhanced by using a large defocus, but this strongly reduces the resolution. Recently, phase plates have been developed to enhance the contrast at small defocus, but electrical charging remains a problem. Single-particle cryo-electron microscopy is mostly used to minimize the radiation damage and to enhance the resolution of the 3D reconstructions, but it requires averaging images of a massive number of individual particles. Here we present a new route to achieve the same goals by hollow cone dark field imaging using thermal diffuse scattered electrons, giving about a fourfold contrast increase compared with bright field imaging. We demonstrate that 3D reconstruction of a stained GroEL particle can reach about 13.5 Å resolution while using a strongly reduced number of images. PMID:27292544
DSA volumetric 3D reconstructions of intracranial aneurysms: A pictorial essay
Cieściński, Jakub; Serafin, Zbigniew; Strześniewski, Piotr; Lasek, Władysław; Beuth, Wojciech
2012-01-01
Digital subtraction angiography (DSA) performed in three projections remains the gold standard of cerebral vessel imaging. However, in specific clinical cases many additional projections are required, or complete visualization of a lesion may even be impossible with 2D angiography. Three-dimensional (3D) reconstructions of rotational angiography have been reported to significantly improve the performance of DSA. In this pictorial essay, specific applications of this technique in the management of intracranial aneurysms are presented, including preoperative aneurysm evaluation, intraoperative imaging, and follow-up. Volumetric reconstructions of 3D DSA are a valuable tool for cerebral vessel imaging. They play a vital role in the assessment of intracranial aneurysms, especially in evaluation of the aneurysm neck and aneurysm recanalization. PMID:22844309
Reconstruction of 3D ion beam micro-tomography data for applications in Cell Biology
NASA Astrophysics Data System (ADS)
Habchi, C.; Nguyen, D. T.; Barberet, Ph.; Incerti, S.; Moretto, Ph.; Sakellariou, A.; Seznec, H.
2009-06-01
The DISRA (Discrete Image Space Reconstruction Algorithm) reconstruction code, created by A. Sakellariou, was conceived for the ideal case of complete three-dimensional (3D) PIXET (Particle Induced X-ray Emission Tomography) data. This implies two major difficulties for biological samples: first, the long duration of such experiments, and second, the subsequent damage that occurs to such fragile specimens. For this reason, the DISRA code was extended at CENBG in order to probe isolated PIXET slices, taking into account the sample structure and mass density provided by 3D STIMT (Scanning Transmission Ion Microscopy Tomography) in the volume of interest. This modified version was tested on a phantom sample, and first results on human cancer cells are also presented.
3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.
Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon
2014-12-01
Visualizing and analyzing the morphological structure of carotid bifurcations are important for understanding the etiology of carotid atherosclerosis, which is a major cause of stroke and transient ischemic attack. For delineation of vasculature in the carotid artery, ultrasound examinations have been widely employed because they are noninvasive and involve no ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study proposes image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transversal images. The lumen boundaries in each transverse ultrasound image were detected using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. When the results are compared, the third technique provides the best boundary extraction of the three. The incomplete boundaries of the arterial lumen caused by acoustic artifacts are somewhat resolved by adopting the correlation mapping, and the distortion in boundary detection near the bifurcation apex is largely reduced by the ellipse-fitting technique. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. To show the 3D vasodilatation of the carotid bifurcation, lumen geometries in the contraction and expansion states were depicted simultaneously at various view angles. The present 3D reconstruction methods would be useful for efficient extraction and construction of 3D lumen geometries of carotid bifurcations from 2D ultrasound images. PMID:24965564
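As an illustration of the ellipse-fitting step, the sketch below performs a plain algebraic conic fit and recovers the lumen centre. It is a simplified stand-in for whatever fitting variant the authors used, with synthetic points in place of ultrasound boundary pixels:

```python
import numpy as np

def fit_conic(pts):
    # Algebraic fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    # (constant term normalized to 1), solved as linear least squares.
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(D, np.ones(len(pts)), rcond=None)
    return coeffs  # (a, b, c, d, e)

def ellipse_center(coeffs):
    # Center of the conic: the point where the conic's gradient vanishes,
    # i.e. solve [[2a, b], [b, 2c]] @ center = [-d, -e].
    a, b, c, d, e = coeffs
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Sample points on an ellipse centred at (3, -2) with semi-axes 4 and 2.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.column_stack([3 + 4 * np.cos(t), -2 + 2 * np.sin(t)])
center = ellipse_center(fit_conic(pts))
```

On noisy lumen boundaries the same fit acts as a regularizer, bridging gaps left by acoustic artifacts.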
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-01-01
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758
3D TEM reconstruction and segmentation process of laminar bio-nanocomposites
Iturrondobeitia, M.; Okariz, A.; Fernandez-Martinez, R.; Jimbert, P.; Guraya, T.; Ibarretxe, J.
2015-03-30
The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the degree of clay platelet opening after integration with the polymer matrix and determines the final properties of the material. Transmission electron microscopy (TEM) is the only technique that can provide direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows complete 3D characterization of the structure, including measurement of the orientation of the clay platelets, their morphology, and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the object of study. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective, automated segmentation methodology for a 3D TEM tomography reconstruction. In this method, the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented V_clay (%) to the actual one. The method is first validated using a fictitious set of objects and then applied to a nanocomposite.
NASA Astrophysics Data System (ADS)
Dahlke, D.; Linkiewicz, M.
2016-06-01
This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in 2.5D surface models, so that the hull of 3D structures can be recovered. With orders of magnitude more 3D points analyzed, the point-cloud-based approach is an order of magnitude more accurate on the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image-processing-based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent, more differentiated semantic annotation through exploitation of texture information.
Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach
de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José
2015-01-01
This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points—with 8 common points at the water surface—and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. The Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, the RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04), and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis. PMID:26175796
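The direct linear transformation (DLT) step can be illustrated with a minimal two-camera triangulation. This sketch is not the study's calibration pipeline; the projection matrices and the 3D point are invented for the example:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Linear triangulation: each camera contributes two homogeneous
    # equations from x × (P X) = 0; stack them and take the null vector
    # of the 4x4 system via SVD.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical 3x4 projection matrices (a stereo pair with baseline 1)
# and a known 3D point to round-trip through projection and triangulation.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.3, 4.0])
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

In the study, the homography-based rectification precedes this step so that the DLT operates on corrected image coordinates.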
Quality Analysis on 3D Building Models Reconstructed from UAV Imagery
NASA Astrophysics Data System (ADS)
Jarzabek-Rychard, M.; Karpina, M.
2016-06-01
Recent developments in UAV technology and structure-from-motion techniques have made UAVs standard platforms for 3D data collection. Because of their flexibility and ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce the labour cost of fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted threefold: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment against ground truth shows that building models acquired with UAV photogrammetry have an accuracy of better than 18 cm in planimetric position and about 15 cm in the height component.
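The check-point assessment reduces to per-component RMS errors. A minimal sketch, with invented deviations standing in for the tacheometer measurements, separating the planimetric and height components as the abstract reports them:

```python
import numpy as np

def rms_components(measured, reference):
    # Planimetric (XY) and height (Z) RMS errors of 3D check points,
    # given as (N, 3) arrays of coordinates in metres.
    d = measured - reference
    rms_xy = float(np.sqrt(np.mean(np.sum(d[:, :2] ** 2, axis=1))))
    rms_z = float(np.sqrt(np.mean(d[:, 2] ** 2)))
    return rms_xy, rms_z

# Hypothetical deviations: four check points, 10 cm offsets per axis.
ref = np.zeros((4, 3))
mod = np.array([[0.1, 0.0, 0.1], [0.0, -0.1, -0.1],
                [-0.1, 0.0, 0.1], [0.0, 0.1, -0.1]])
rms_xy, rms_z = rms_components(mod, ref)
```

Reporting the two components separately is useful because photogrammetric height error typically behaves differently from planimetric error.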
3D volume reconstruction of a mouse brain from histological sections using warp filtering
Ju, Tao; Warren, Joe; Carson, James P.; Bello, Musodiq; Kakadiaris, Ioannis; Chiu, Wah; Thaller, Christina; Eichele, Gregor
2006-09-30
Sectioning tissues for optical microscopy often introduces distortions in the resulting sections that make 3D reconstruction difficult. Here we present an automatic method for producing a smooth 3D volume from distorted 2D sections in the absence of any undistorted reference. The method is based on pairwise elastic image warps between successive tissue sections, which can be computed by 2D image registration. Using a Gaussian filter, an average warp is computed for each section from the pairwise warps in a group of its neighboring sections. The average warps deform each section to match its neighboring sections, thus creating a smooth volume in which corresponding features on successive sections lie close to each other. The proposed method can be used with any existing 2D image registration method for 3D reconstruction. In particular, we present a novel image-warping algorithm based on dynamic programming that extends Dynamic Time Warping from 1D speech recognition to compute pairwise warps between high-resolution 2D images. The warping algorithm efficiently computes a restricted class of 2D local deformations that are characteristic of successive tissue sections. Finally, a validation framework is proposed and applied to evaluate the quality of reconstruction using both real sections and a synthetic volume.
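The Gaussian-filtered average warp can be sketched as a weighted sum of pairwise displacement fields. This is a toy reading of the idea under the assumption that each warp is a dense (H, W, 2) displacement field indexed by the signed neighbour offset; the constant fields here are chosen so the average is easy to check:

```python
import numpy as np

def gaussian_average_warp(warps, sigma):
    # `warps[k]` is the displacement field (H, W, 2) aligning a section to
    # its neighbour at signed offset k. Average the fields with Gaussian
    # weights in the offset k.
    offsets = np.array(sorted(warps))
    weights = np.exp(-offsets ** 2 / (2 * sigma ** 2))
    weights /= weights.sum()
    avg = np.zeros_like(warps[offsets[0]], dtype=float)
    for k, w in zip(offsets, weights):
        avg += w * warps[k]
    return avg

# Toy example: constant, antisymmetric displacements from four neighbours,
# so the Gaussian-weighted average should cancel to (near) zero.
H, W = 8, 8
warps = {-2: np.full((H, W, 2), -2.0), -1: np.full((H, W, 2), -1.0),
         1: np.full((H, W, 2), 1.0), 2: np.full((H, W, 2), 2.0)}
avg = gaussian_average_warp(warps, sigma=1.5)
```

Deforming each section by such an average pulls it toward the consensus of its neighbourhood, which is what smooths the stacked volume.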
A complete system for 3D reconstruction of roots for phenotypic analysis.
Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J
2015-01-01
Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated, as it is self-calibrating. The system starts with the detection of root tips in images from a sequence generated by turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points of the root boundary and the Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse-fitting algorithm that weights the data points by eccentricity. The conics projected from the circular trajectories have complex conjugate intersections, which are the images of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the images of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are used to reconstruct a 3D voxel model of the roots. We show real 3D reconstructions of roots that are detailed and realistic enough for phenotypic analysis. PMID:25381112
ERIC Educational Resources Information Center
Martin, Charys M.; Roach, Victoria A.; Nguyen, Ngan; Rice, Charles L.; Wilson, Timothy D.
2013-01-01
The use of three-dimensional (3D) models for education, pre-operative assessment, presurgical planning, and measurement have become more prevalent. With the increase in prevalence of 3D models there has also been an increase in 3D reconstructive software programs that are used to create these models. These software programs differ in…
Kumta, Samir; Kumta, Monica; Jain, Leena; Purohit, Shrirang; Ummul, Rani
2015-01-01
Introduction: Replication of the exact three-dimensional (3D) structure of the maxilla and mandible is now a priority when attempting reconstruction of these bones to attain complete functional and aesthetic rehabilitation. We present the process of rapid prototyping using stereolithography to produce templates for modelling bone grafts and implants for maxilla/mandible reconstructions, its applications in tumour/trauma, and outcomes for primary and secondary reconstruction. Materials and Methods: Stereolithographic template-assisted reconstruction was used on 11 patients for the reconstruction of the mandible/maxilla, primarily following tumour excision and secondarily for the realignment of post-traumatic malunited fractures or deformity corrections. Data obtained from computed tomography (CT) scans with 1-mm resolution were converted into a computer-aided design (CAD) model using the CT Digital Imaging and Communications in Medicine (DICOM) data. Once the CAD model was constructed, it was converted into a stereolithographic format and then processed by rapid prototyping technology to produce a physical anatomical model in resin. This resin model replicates the native mandible and can thus be used off the table as a guide for modelling the bone grafts. Discussion: This conversion of two-dimensional (2D) CT data into 3D models is a very precise guide for shaping bone grafts. Further, the CAD model can reconstruct the defective half of the mandible using the mirror-image principle, and a normal anatomical model can be created to aid secondary reconstructions. Conclusion: This novel approach allows a precise translation of the treatment plan directly to the surgical field. It is also an important teaching tool for implant moulding and fixation, and helps in patient counselling. PMID:26933279
Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk
2015-03-15
Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm, while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.
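The 3D distance error reported above is, in essence, a mean point-to-point distance between two sampled reconstructions of the same catheter. A minimal sketch with an invented catheter path and a uniform 0.5 mm offset (a real comparison would first establish point correspondences along the catheters):

```python
import numpy as np

def mean_3d_distance(traj_a, traj_b):
    # Mean point-to-point 3D distance (mm) between two reconstructions of
    # the same catheter, sampled at corresponding arc-length positions.
    return float(np.mean(np.linalg.norm(traj_a - traj_b, axis=1)))

# Hypothetical catheter path in mm: a gentle sinusoid in x-y rising in z,
# and a second reconstruction shifted by 0.5 mm along x.
s = np.linspace(0, 1, 50)
ref = np.column_stack([100 * s, 20 * np.sin(2 * np.pi * s), 5 * s])
em = ref + np.array([0.5, 0.0, 0.0])
err = mean_3d_distance(em, ref)
```

With the uniform offset, the metric returns exactly the offset magnitude, which makes it a convenient sanity check before applying it to measured trajectories.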