High performance computing approaches for 3D reconstruction of complex biological specimens.
da Silva, M Laura; Roca-Piera, Javier; Fernández, José-Jesús
2010-01-01
Knowledge of the structure of specimens is crucial to determining the role they play in cellular and molecular biology. Obtaining three-dimensional (3D) reconstructions by means of tomographic reconstruction algorithms requires large projection images and long processing times. We therefore propose the use of high performance computing (HPC) to cope with the huge computational demands of this problem. We have implemented an HPC strategy in which the distribution of tasks follows the master-slave paradigm: the master processor distributes slabs of slices, pieces of the final 3D structure to reconstruct, among the slave processors and receives the reconstructed slices of the volume. We have evaluated the performance of our HPC approach using different slab sizes. We observed that, for a given number of processors, there is an optimal slab size that minimizes communication time while maintaining a reasonable grain of parallelism to be exploited by the set of processors. PMID:20865517
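The slab bookkeeping described above can be sketched in a few lines. This is an illustrative assumption, not the authors' code: the helper names are invented, and a static round-robin schedule stands in for whatever dispatch the real master process uses. It still shows the trade-off the abstract describes — larger slabs mean fewer master-slave messages but coarser parallelism.

```python
# Hypothetical sketch of the master's slab-distribution bookkeeping:
# slices of the volume are grouped into contiguous slabs, and slabs are
# assigned to slave ranks. A larger slab means fewer round trips to the
# master but a coarser grain of parallelism.

def make_slabs(n_slices, slab_size):
    """Partition slice indices [0, n_slices) into contiguous slabs."""
    return [list(range(i, min(i + slab_size, n_slices)))
            for i in range(0, n_slices, slab_size)]

def assign_round_robin(slabs, n_slaves):
    """Static round-robin assignment of slabs to slave ranks."""
    return {rank: slabs[rank::n_slaves] for rank in range(n_slaves)}

slabs = make_slabs(n_slices=256, slab_size=16)   # 16 slabs of 16 slices
schedule = assign_round_robin(slabs, n_slaves=4)
# Each slave reconstructs its slabs and returns them; the number of
# master<->slave round trips per slave is len(schedule[rank]).
```

Sweeping `slab_size` in such a model is one way to locate the communication/parallelism optimum the abstract reports.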
Analysis of bite marks in foodstuffs by computer tomography (cone beam CT)--3D reconstruction.
Marques, Jeidson; Musse, Jamilly; Caetano, Catarina; Corte-Real, Francisco; Corte-Real, Ana Teresa
2013-12-01
The use of three-dimensional (3D) analysis of forensic evidence is highlighted in comparison with traditional methods. This three-dimensional analysis is based on the registration of the surface from a bitten object. The authors propose to use Cone Beam Computed Tomography (CBCT), which is used in dental practice, in order to study the surface and interior of bitten objects and dental casts of suspects. In this study, CBCT is applied to the analysis of bite marks in foodstuffs, which may be found in a forensic case scenario. 6 different types of foodstuffs were used: chocolate, cheese, apple, chewing gum, pizza and tart (flaky pastry and custard). The food was bitten into and dental casts of the possible suspects were made. The dental casts and bitten objects were registered using an x-ray source and the CBCT equipment iCAT® (Pennsylvania, EUA). The software InVivo5® (Anatomage Inc, EUA) was used to visualize and analyze the tomographic slices and 3D reconstructions of the objects. For each material an estimate of its density was assessed by two methods: HU values and specific gravity. All the used materials were successfully reconstructed as good quality 3D images. The relative densities of the materials in study were compared. Amongst the foodstuffs, the chocolate had the highest density (median value 100.5 HU and 1,36 g/cm(3)), while the pizza showed to have the lowest (median value -775 HU and 0,39 g/cm(3)), on both scales. Through tomographic slices and three-dimensional reconstructions it was possible to perform the metric analysis of the bite marks in all the foodstuffs, except for the pizza. These measurements could also be obtained from the dental casts. The depth of the bite mark was also successfully determined in all the foodstuffs except for the pizza. Cone Beam Computed Tomography has the potential to become an important tool for forensic sciences, namely for the registration and analysis of bite marks in foodstuffs that may be found in a crime
Kressler, Bryan; Spincemaille, Pascal; Prince, Martin R; Wang, Yi
2006-09-01
Time-resolved 3D MRI with high spatial and temporal resolution can be achieved using spiral sampling and sliding-window reconstruction. Image reconstruction is computationally intensive because of the need for data regridding, a large number of temporal phases, and multiple RF receiver coils. Inhomogeneity blurring correction for spiral sampling further increases the computational work load by an order of magnitude, hindering the clinical utility of spiral trajectories. In this work the reconstruction time is reduced by a factor of >40 compared to reconstruction using a single processor. This is achieved by using a cluster of 32 commercial off-the-shelf computers, commodity networking hardware, and readily available software. The reconstruction system is demonstrated for time-resolved spiral contrast-enhanced (CE) peripheral MR angiography (MRA), and a reduction of reconstruction time from 80 min to 1.8 min is achieved. PMID:16892189
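The sliding-window reconstruction mentioned above can be sketched on toy data. The function below is an assumed, simplified model of windowed frame formation only (the real system also performs regridding, blurring correction, and multi-coil combination across a 32-node cluster): with interleaves acquired cyclically, each temporal frame combines the most recent full window of interleaves, so a new frame becomes available after every acquisition.

```python
import numpy as np

# Toy sliding-window frame formation: frame t averages the `window`
# most recent interleaves, yielding one new frame per acquired interleave.

def sliding_window_frames(interleaves, window):
    """interleaves: array (n_acq, n_samples).
    Returns (n_acq - window + 1, n_samples) frames."""
    n_acq = interleaves.shape[0]
    frames = [interleaves[t - window + 1:t + 1].mean(axis=0)
              for t in range(window - 1, n_acq)]
    return np.stack(frames)

acq = np.arange(12, dtype=float).reshape(12, 1)  # 12 interleaves, 1 sample each
frames = sliding_window_frames(acq, window=4)
# frames[0] combines interleaves 0..3, frames[1] combines 1..4, and so on.
```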
NASA Astrophysics Data System (ADS)
Huerta, N. J.; Murphy, M. A.; Natarajan, V.; Weber, G.; Hamann, B.; Sumner, D. Y.
2005-12-01
Three-dimensional visualization of intricate microbial structures in rocks is essential to understand the growth of ancient microbial communities. We have imaged and reconstructed the three-dimensional morphology of 2.5-2.6 billion year old intricate microbialites preserved in carbonate using both serial sectioning and neutron computed tomography (NCT). Reconstruction techniques vary with data type and sample preservation. NCT is a non-destructive technique for imaging organic-containing samples with sufficiently high hydrogen concentrations. The resolution of reconstruction is finer than 500 microns. We reconstructed microbialites preserved as organic inclusions in calcite using NCT. Reconstructions are interpreted using volume rendering, segmentation, and an interactive Matlab/visualization environment. Visualizations demonstrate the intricacy of the structures. Noise currently limits automatic growth surface extraction, but growth of structures can be qualitatively evaluated. One of the largest obstacles to date is efficient manipulation of large data sets. Our current visualization approach always renders the supplied data set at full resolution, which requires down-sampling of datasets larger than 256³ pixels (acquired volume data consist of up to 2048³ pixels) to isolate regions of interest and extract important features. We are exploring the use of multi-resolution techniques that store a dataset at different levels of detail and choose an appropriate resolution during user interaction. Such an approach will allow us to visualize raw data at full resolution. Serial sectioning and scanning successive horizons provide reconstructions of samples lacking sufficient hydrogen for NCT. This technique destroys the sample and has a lower resolution than NCT. However, intricate networks of microbial laminae surrounded by cement-filled voids can be characterized using this technique. After microbial surfaces are manually interpreted on slices, the images lack noise
NASA Astrophysics Data System (ADS)
Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali
2015-01-01
In order to improve the quality of 3D X-ray tomography reconstruction for Non Destructive Testing (NDT), we investigate in this paper hierarchical Bayesian methods. In NDT, useful prior information on the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only is the volume estimated thanks to the prior model of the volume, but also the hyperparameters of this prior. This additional complexity in the reconstruction methods, when applied to large volumes (from 512³ to 8192³ voxels), results in an increased computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper lead to algorithmic acceleration by Variational Bayesian Approximation (VBA) [1] and hardware acceleration thanks to projection and back-projection operators parallelized on many-core processors such as GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image, implemented in a hierarchical way [3, 4, 1]. The operators H (forward or projection) and H^t (adjoint or back-projection) implemented on multi-GPU [2] have been used in this study. Different methods are evaluated on the synthetic "Shepp and Logan" volume in terms of quality and time of reconstruction. We used several simple regularizations of order 1 and order 2. Other prior models also exist [5]. Sometimes, for a discrete image, we can perform segmentation and reconstruction at the same time; then the reconstruction can be done with fewer projections.
NASA Astrophysics Data System (ADS)
Degerman, J.; Winterfors, E.; Faijerson, J.; Gustavsson, T.
2007-02-01
This paper describes a computational model for image formation of in-vitro adult hippocampal progenitor (AHP) cells in bright-field time-lapse microscopy. Although this microscopy modality barely generates sufficient contrast for imaging translucent cells, we show that by using a stack of defocused image slices it is possible to extract the position and shape of spherically shaped specimens, such as AHP cells. This inverse problem was solved by modeling the physical objects and the image formation system, and using an iterative nonlinear optimization algorithm to minimize the difference between the reconstructed and measured image stacks. By assuming that the position and shape of the cells do not change significantly between two time instances, we can optimize these parameters using the previous time instance in a Bayesian estimation approach. The 3D reconstruction algorithm settings, such as the focal sampling distance and the PSF, were calibrated using latex spheres of known size and refractive index. Using the residual between reconstructed and measured image intensities, we computed a peak signal-to-noise ratio (PSNR) of 28 dB for the sphere stack. A biological specimen analysis was done using an AHP cell, where the reconstruction PSNR was likewise 28 dB. The cell was immunohistochemically stained and scanned in a confocal microscope in order to compare our cell model to a ground truth. After convergence, the modeled cell volume had an error of less than one percent.
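The 28 dB figures quoted above are peak signal-to-noise ratios between the reconstructed and measured stacks. A minimal sketch of that metric, using the standard PSNR definition (assumed here to match the paper's usage):

```python
import numpy as np

def psnr(reference, reconstruction, peak=None):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    reference = np.asarray(reference, dtype=float)
    reconstruction = np.asarray(reconstruction, dtype=float)
    if peak is None:
        peak = reference.max()          # use the reference's peak value
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
ref[2:6, 2:6] = 255.0
noisy = ref + 1.0                        # uniform error of one grey level
# psnr(ref, noisy) = 10*log10(255^2 / 1), about 48.1 dB
```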
3D Ion Temperature Reconstruction
NASA Astrophysics Data System (ADS)
Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi
2009-11-01
The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in the toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics has been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature with 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is repeated over several reproducible discharges to follow the heating and acceleration process during the merging reconnection.
3D puzzle reconstruction for archeological fragments
NASA Astrophysics Data System (ADS)
Jampy, F.; Hostein, A.; Fauvet, E.; Laligant, O.; Truchetet, F.
2015-03-01
The reconstruction of broken artifacts is a common task in the archeology domain; it can now be supported by 3D data acquisition devices and computer processing. Many works have been dedicated in the past to reconstructing 2D puzzles, but very few propose a true 3D approach. We present here a complete solution including a dedicated transportable 3D acquisition set-up and a virtual tool with a graphic interface allowing archeologists to manipulate the fragments and interactively reconstruct the puzzle. The whole lateral part is acquired by rotating the fragment around an axis chosen within a light sheet, thanks to a step motor synchronized with the camera frame clock. Another camera provides a top view of the fragment under scanning. A scanning accuracy of 100 μm is attained. The iterative automatic processing algorithm is based on segmentation of the lateral part of the fragments into facets, followed by 3D matching that provides the user with a ranked short list of possible assemblies. The device has been applied to the reconstruction of a set of 1200 fragments from broken tablets bearing a Latin inscription dating from the first century AD.
Geometric Neural Computing for 2D Contour and 3D Surface Reconstruction
NASA Astrophysics Data System (ADS)
Rivera-Rovelo, Jorge; Bayro-Corrochano, Eduardo; Dillmann, Ruediger
In this work we present an algorithm to approximate the surface of 2D or 3D objects, combining concepts from geometric algebra and artificial neural networks. Our approach is based on the self-organized neural network called Growing Neural Gas (GNG), incorporating versors of the geometric algebra in its neural units; such versors are the transformations that are determined during the training stage and then applied to a point to approximate the surface of the object. We also incorporate the information given by the generalized gradient vector flow to automatically select the input patterns, and also in the learning stage in order to improve the performance of the net. Several examples using medical images are presented, as well as images from automatic visual inspection. We compared the results obtained using snakes against the GSOM incorporating the gradient information and using versors. Such results confirm that our approach is very promising. As a second application, a kind of morphing or registration procedure is shown; namely, the algorithm can be used to transform one model at time t1 into another at time t2. We also include examples applying the same procedure, now extended to models based on spheres.
Mory, Cyril; Auvray, Vincent; Zhang, Bo; Grass, Michael; Schäfer, Dirk; Chen, S. James; Carroll, John D.; Rit, Simon; Peyrin, Françoise; Douek, Philippe; Boussel, Loïc
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single-sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (4D ROOSTER for short) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied to human cardiac C-arm CT, and potentially to other dynamic tomography areas. It can easily be adapted to other problems since regularization is decoupled from projection and back projection.
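The four regularization steps can be sketched on a toy 1D-space + time array. This is our reading of the description above, not the authors' implementation: positivity, temporal averaging outside a motion mask, and explicit total-variation (sub)gradient steps in space and in time.

```python
import numpy as np

def tv_step(a, axis, w):
    """One explicit (sub)gradient descent step on TV = sum |diff| along `axis`:
    a <- a + w*(d_i - d_{i-1}) with d = sign(diff(a)), zero-padded at the ends."""
    d = np.sign(np.diff(a, axis=axis))
    pad_hi = [(0, 0)] * a.ndim; pad_hi[axis] = (0, 1)
    pad_lo = [(0, 0)] * a.ndim; pad_lo[axis] = (1, 0)
    return a + w * (np.pad(d, pad_hi) - np.pad(d, pad_lo))

def rooster_regularize(vol, motion_mask, tv_weight=0.05):
    """vol: (n_x, n_t) toy volume over time; motion_mask: bool (n_x,)."""
    vol = np.maximum(vol, 0.0)                              # 1) positivity
    static = ~motion_mask                                   # 2) average static
    vol[static] = vol[static].mean(axis=1, keepdims=True)   #    voxels along time
    vol = tv_step(vol, axis=0, w=tv_weight)                 # 3) spatial TV step
    vol[motion_mask] = tv_step(vol[motion_mask], axis=1,
                               w=tv_weight)                 # 4) temporal TV step
    return vol
```

In 4D ROOSTER proper the TV minimizations are full proximal sub-problems rather than single gradient steps, and the volume is 3D + time; the alternation structure is the point here.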
Forensic 3D Scene Reconstruction
Little, Charles Q.; Peters, Ralph R.; Rigdon, J. Brian; Small, Daniel E.
1999-10-12
Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would allow law enforcement agents to quickly document and accurately record a crime scene.
Forensic 3D scene reconstruction
NASA Astrophysics Data System (ADS)
Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.
2000-05-01
Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would allow law enforcement agents to quickly document and accurately record a crime scene.
NASA Astrophysics Data System (ADS)
Kawata, Yoshiyuki; Koizumi, Kohei
2014-10-01
The demand for 3D city modeling has been increasing in many applications, such as urban planning, computer gaming with realistic city environments, car navigation systems showing 3D city maps, and virtual city tourism inviting future visitors to a virtual city walkthrough. We propose a simple method for reconstructing a 3D urban landscape from airborne LiDAR point cloud data. The automatic reconstruction method was implemented by integrating all connected regions, which were extracted and extruded from altitude mask images. These mask images were generated from the gray-scale LiDAR image using altitude threshold ranges. In this study we successfully demonstrated the proposed method on a scene of the Kanazawa city center using airborne LiDAR point cloud data.
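The altitude-mask step described above can be sketched as follows. This is our toy reading of the technique, not the authors' code: the height image is thresholded into altitude bands, connected regions are labeled in each band, and each region would then be extruded to its band's height to form block-model buildings.

```python
import numpy as np

def label4(mask):
    """Label 4-connected regions of a boolean 2D mask (tiny flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        n += 1
        stack = [(r, c)]
        while stack:
            y, x = stack.pop()
            if labels[y, x]:
                continue
            labels[y, x] = n
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if (0 <= yy < mask.shape[0] and 0 <= xx < mask.shape[1]
                        and mask[yy, xx] and not labels[yy, xx]):
                    stack.append((yy, xx))
    return labels, n

def altitude_regions(height_img, bands):
    """bands: list of (lo, hi) altitude ranges.
    Returns [(band, labeled_mask, n_regions), ...], one entry per band."""
    out = []
    for lo, hi in bands:
        mask = (height_img >= lo) & (height_img < hi)
        labels, n = label4(mask)
        out.append(((lo, hi), labels, n))
    return out

h = np.zeros((6, 6))
h[1:3, 1:3] = 10.0      # one low building
h[4:6, 4:6] = 25.0      # one taller building
regions = altitude_regions(h, bands=[(5, 15), (15, 30)])
```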
Photogrammetric 3D reconstruction using mobile imaging
NASA Astrophysics Data System (ADS)
Fritsch, Dieter; Syll, Miguel
2015-03-01
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz
2015-01-01
Due to their potential for compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach treats the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations describing the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies a derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates in the iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method reduces the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications. PMID:25775480
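The method builds on the algebraic reconstruction technique (ART). Below is a minimal Kaczmarz-style ART sketch for a generic linear system Ax = p; the paper's derivative handling of the DPC projections is its actual contribution and is not reproduced here.

```python
import numpy as np

def art(A, p, n_iter=500, relax=1.0, x0=None):
    """Row-action ART (Kaczmarz): sweep the rays, projecting x onto each
    row's hyperplane with relaxation factor `relax`."""
    A = np.asarray(A, dtype=float)
    p = np.asarray(p, dtype=float)
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float)
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue               # skip rays that miss the volume
            x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 5))            # toy system matrix (8 rays, 5 voxels)
x_true = rng.normal(size=5)
x_rec = art(A, A @ x_true)             # consistent data: converges to x_true
```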
Borowska-Solonynko, Aleksandra; Solonynko, Bohdan
2015-02-01
Forensic pathologists are often called upon to determine the mechanism and severity of injuries in living individuals. Such expert testimony is often based solely on hand-written clinical notes. The victims' injuries may also be visualized via three-dimensional (3D) reconstruction of computed tomography (CT) images. This method has certain benefits but is not free from limitations. This paper presents two case reports. The first case is that of a female who was brought to the hospital with a knife thrust into her body. The prosecutor's questions focused on the wound channel. The information obtained from the patient's medical records was very general, with many contradictory statements. A re-evaluation of the available CT scan data and a subsequent 3D reconstruction helped determine the exact course of the wound channel. The other case was that of a young male, hospitalized based on CT evidence of bilateral rib fractures, who claimed to have been assaulted by police officers. Court expert witnesses were already in possession of a 3D reconstruction showing symmetrical fractures of the patient's lower ribs with bone fragment displacement. An expert witness in radiology definitively excluded the presence of any actual fractures, and explained their apparent visibility in the three-dimensionally reconstructed image as a motion artifact. These two cases suggest that a professionally conducted 3D CT reconstruction is a very useful tool in providing expert testimony on injuries in living victims. However, the deceptive simplicity of conducting such a reconstruction may encourage inexperienced individuals to undertake it, and thus lead to erroneous conclusions. PMID:25623187
3D Computations and Experiments
Couch, R; Faux, D; Goto, D; Nikkel, D
2003-05-12
This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The coming year's activities will reflect the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials-related activities from the previous project.
3-D flame temperature field reconstruction with multiobjective neural network
NASA Astrophysics Data System (ADS)
Wan, Xiong; Gao, Yiqing; Wang, Yuanmei
2003-02-01
A novel 3-D temperature field reconstruction method is proposed in this paper, based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multiwavelength thermometry is established, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP), the reconstruction results of the new method are discussed in detail. The study shows that the new method always gives the best reconstruction results. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with the novel method.
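Multiwavelength thermometry relates spectral intensities to temperature. A minimal two-wavelength sketch under the Wien approximation is shown below; this is a standard simplification for illustration, and the paper's full multiwavelength model and Hopfield-network inversion are not reproduced.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T, eps=1.0):
    """Wien-approximation spectral intensity (first constant set to 1)."""
    return eps * lam ** -5 * np.exp(-C2 / (lam * T))

def two_color_temperature(I1, I2, lam1, lam2):
    """Invert the ratio of intensities at two wavelengths for temperature,
    assuming equal emissivity at both wavelengths (gray-body)."""
    return (C2 * (1 / lam2 - 1 / lam1)
            / (np.log(I1 / I2) - 5 * np.log(lam2 / lam1)))

T_true = 1800.0
l1, l2 = 650e-9, 550e-9                # red and green channels, in meters
T_est = two_color_temperature(wien_intensity(l1, T_true),
                              wien_intensity(l2, T_true), l1, l2)
```

Combining such per-ray temperature relations with a tomographic inversion (ART, FBP, or the neural network above) yields the sectional temperature field.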
3D EIT image reconstruction with GREIT.
Grychtol, Bartłomiej; Müller, Beat; Adler, Andy
2016-06-01
Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
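GREIT-style reconstruction is a single linear map R applied to the measurement vector, trained so that simulated measurements of many small targets map onto "desired" images of those targets. The toy ridge-regression training step below is our sketch of that idea, not the published GREIT weighting scheme, and the random sensitivity matrix stands in for a real EIT forward model.

```python
import numpy as np

def train_linear_reconstructor(Y, D, lam=1e-3):
    """Y: (n_meas, n_targets) simulated training measurements;
    D: (n_pix, n_targets) desired target images.
    Returns R minimizing ||R Y - D||_F^2 + lam ||R||_F^2."""
    n_meas = Y.shape[0]
    return D @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(n_meas))

rng = np.random.default_rng(1)
J = rng.normal(size=(16, 36))          # toy sensitivity: 16 measurements, 36 pixels
D = np.eye(36)                         # desired image: a point per training target
Y = J @ D                              # noiseless simulated measurements
R = train_linear_reconstructor(Y, D)   # (36, 16) one-shot reconstruction matrix
```

Extending the training targets (and the forward model) from a 2D plane to a volume is essentially the 2D-to-3D step the abstract describes.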
The PRISM3D paleoenvironmental reconstruction
Dowsett, H.; Robinson, M.; Haywood, A.M.; Salzmann, U.; Hill, Daniel; Sohl, L.E.; Chandler, M.; Williams, Mark; Foley, K.; Stoll, D.K.
2010-01-01
The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstruction is an internally consistent and comprehensive global synthesis of a past interval of relatively warm and stable climate. It is regularly used in model studies that aim to better understand Pliocene climate, to improve model performance in future climate scenarios, and to distinguish model-dependent climate effects. The PRISM reconstruction is constantly evolving in order to incorporate additional geographic sites and environmental parameters, and is continuously refined by independent research findings. The new PRISM three dimensional (3D) reconstruction differs from previous PRISM reconstructions in that it includes a subsurface ocean temperature reconstruction, integrates geochemical sea surface temperature proxies to supplement the faunal-based temperature estimates, and uses numerical models for the first time to augment fossil data. Here we describe the components of PRISM3D and describe new findings specific to the new reconstruction. Highlights of the new PRISM3D reconstruction include removal of Hudson Bay and the Great Lakes and creation of open waterways in locations where the current bedrock elevation is less than 25m above modern sea level, due to the removal of the West Antarctic Ice Sheet and the reduction of the East Antarctic Ice Sheet. The mid-Piacenzian oceans were characterized by a reduced east-west temperature gradient in the equatorial Pacific, but PRISM3D data do not imply permanent El Niño conditions. The reduced equator-to-pole temperature gradient that characterized previous PRISM reconstructions is supported by significant displacement of vegetation belts toward the poles, is extended into the Arctic Ocean, and is confirmed by multiple proxies in PRISM3D. Arctic warmth coupled with increased dryness suggests the formation of warm and salty paleo North Atlantic Deep Water (NADW) and a more vigorous thermohaline circulation system that may
3D model reconstruction of underground goaf
NASA Astrophysics Data System (ADS)
Fang, Yuanmin; Zuo, Xiaoqing; Jin, Baoxuan
2005-10-01
By constructing a 3D model of an underground goaf, we can better control the mining process and arrange mining work reasonably. However, the shapes of goafs and of the laneways among them are very irregular, which creates great difficulties in data acquisition and 3D model reconstruction. In this paper, we investigate methods for data acquisition and 3D model construction of underground goafs, building topological relations among goafs. The main contents are as follows: a) an efficient encoding rule is proposed to structure the field measurement data; b) a 3D model construction method for goafs is put forward, which combines several TIN (triangulated irregular network) pieces, and an efficient automatic processing algorithm for TIN boundaries is proposed; c) topological relations among goaf models are established. The TIN object is the basic modeling element of the goaf 3D model, and the topological relations among goafs are created and maintained by building topological relations among TIN objects. Based on this, various 3D spatial analysis functions can be performed, including transects and volume calculation of goafs. A prototype has been developed that realizes the models and algorithms proposed in this paper.
Light field display and 3D image reconstruction
NASA Astrophysics Data System (ADS)
Iwane, Toru
2016-06-01
Light field optics and its applications have become rather popular these days. With light field optics, or the light field thesis, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operation is performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image is of finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
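The "refocusing" operation mentioned above can be sketched as shift-and-add: each sub-aperture view u is shifted in proportion to u times the chosen focal disparity, and the views are averaged. This is a standard toy model of refocusing, not the author's real-domain pipeline; the 1D views and disparity values are invented for illustration.

```python
import numpy as np

def refocus(views, disparity):
    """views: dict {u: 1D image}. Shift view u by round(u*disparity)
    samples and average — points at that disparity realign and sharpen."""
    acc = np.zeros_like(next(iter(views.values())), dtype=float)
    for u, img in views.items():
        acc += np.roll(img, int(round(u * disparity)))
    return acc / len(views)

# A point at a depth with disparity 2: view u sees it shifted by -2*u.
n = 32
views = {}
for u in range(-2, 3):
    img = np.zeros(n)
    img[(16 - 2 * u) % n] = 1.0
    views[u] = img

sharp = refocus(views, disparity=2)    # views realign: peak restored at pixel 16
blurry = refocus(views, disparity=0)   # misfocused: energy spread over 5 pixels
```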
IFSAR processing for 3D target reconstruction
NASA Astrophysics Data System (ADS)
Austin, Christian D.; Moses, Randolph L.
2005-05-01
In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
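The traditional single-scatterer IFSAR height estimate that the multiple-scatterer analysis perturbs can be written down directly. A sketch with illustrative, made-up parameters (the small-angle, flat-earth-removed textbook relation, not the paper's specific geometry):

```python
import numpy as np

def ifsar_height(phase, wavelength, slant_range, baseline):
    """Single-scatterer IFSAR height estimate from interferometric phase.

    Textbook small-angle relation (flat-earth phase already removed):
        h ~ lambda * R * phi / (2 * pi * B)
    A second scatterer in the same resolution cell perturbs the measured
    phase and hence biases this estimate, which is the height-error
    mechanism the paper analyses.
    """
    return wavelength * slant_range * phase / (2.0 * np.pi * baseline)

# Illustrative numbers only: X-band wavelength, 10 km slant range, 1 m baseline
h = ifsar_height(phase=0.1, wavelength=0.03, slant_range=10e3, baseline=1.0)
print(round(h, 3))  # height estimate in metres
```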
3D reconstruction of tensors and vectors
Defrise, Michel; Gullberg, Grant T.
2005-02-17
Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for reconstruction from both Radon planar and X-ray or line-integral projections, because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is the potential use of cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart, which creates a need for future theory on tensor tomography in a motion field and hence for a better understanding of the MRI signal for diffusion processes in a deforming medium. The techniques developed may allow the application of MRI tensor tomography to the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and spine, in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging, such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other acquisition geometries such as cone beam tomography of tensor fields.
NASA Astrophysics Data System (ADS)
Münster, S.; Kuroczyński, P.; Pfarr-Harfst, M.; Grellert, M.; Lengyel, D.
2015-08-01
The workgroup for Digital Reconstruction of the Digital Humanities in the German-speaking area association (Digital Humanities im deutschsprachigen Raum e.V.) was founded in 2014 as a cross-disciplinary scientific society dealing with all aspects of digital reconstruction of cultural heritage, and currently involves more than 40 German researchers. The workgroup is moreover dedicated to synchronising and fostering methodological research on these topics. As one preliminary result, a memorandum was created to name urgent research challenges and prospects in a condensed way and to assemble a research agenda proposing demands for further research and development activities over the coming years. The version presented within this paper was originally created as a contribution to the so-called agenda development process initiated by the German Federal Ministry of Education and Research (BMBF) in 2014 and has been amended during a joint meeting of the digital reconstruction workgroup in November 2014.
Adapting 3D Equilibrium Reconstruction to Reconstruct Weakly 3D H-mode Tokamaks
NASA Astrophysics Data System (ADS)
Cianciosa, M. R.; Hirshman, S. P.; Seal, S. K.; Unterberg, E. A.; Wilcox, R. S.; Wingen, A.; Hanson, J. D.
2015-11-01
The application of resonant magnetic perturbations for edge localized mode (ELM) mitigation breaks the toroidal symmetry of tokamaks. In these scenarios, the axisymmetric assumptions of the Grad-Shafranov equation no longer apply. By extension, equilibrium reconstruction tools built around these axisymmetric assumptions are insufficient to fully reconstruct a 3D perturbed equilibrium. 3D reconstruction tools typically work on systems where the 3D components of the signals are a significant fraction of the input signals. In nominally axisymmetric systems, applied field perturbations can be on the order of 1% of the main field or less. To reconstruct these equilibria, the 3D component of the signals must be isolated from the axisymmetric portion to provide the necessary information for reconstruction. This presentation will report on the adaptation of V3FIT for application to DIII-D H-mode discharges with applied resonant magnetic perturbations (RMPs). Newly implemented motional Stark effect signals and modeling of electric field effects will also be discussed. Work supported under U.S. DOE Cooperative Agreement DE-AC05-00OR22725.
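The isolation step described above amounts to removing the axisymmetric (n = 0) toroidal harmonic from each diagnostic. A minimal NumPy sketch of the idea (a simplified illustration, not V3FIT code):

```python
import numpy as np

def isolate_3d_component(signal):
    """Split a diagnostic measured at several toroidal angles into parts.

    signal : 1D array of values sampled at equally spaced toroidal angles.
    The n = 0 (axisymmetric) part is the toroidal average; the remainder
    is the small 3D perturbation that carries the information a 3D
    equilibrium reconstruction actually needs.
    """
    axisym = np.mean(signal)
    return axisym, signal - axisym

phi = np.linspace(0, 2 * np.pi, 8, endpoint=False)
sig = 1.0 + 0.01 * np.cos(2 * phi)   # a 1% n=2 ripple on an n=0 background
n0, pert = isolate_3d_component(sig)
print(round(float(n0), 6))           # 1.0, the axisymmetric background
```

Note how the perturbation is two orders of magnitude below the background, which is why it must be separated before it can constrain a 3D fit.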
3D medical volume reconstruction using web services.
Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter
2008-04-01
We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that 3D medical volume reconstruction requires significant computing resources and human expertise in the medical and computer science areas. The web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time the UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned to the medical collaborators for verification in a new window. In this paper, we present the requirements of the 3D volume reconstruction problem and the architecture of the prototype system developed at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscopy. PMID:18336808
van Middendorp, Lars B; Maessen, Jos G; Sardari Nia, Peyman
2014-12-01
We describe the case of a 59-year-old male patient undergoing combined coronary artery bypass grafting and aortic valve replacement. Manipulation of the heart during cardiopulmonary bypass significantly decreased venous return. Several measures were necessary to improve venous return to a level at which continuation of the procedure was safe. Because of the initial difficulties with venous return, we decided to selectively cross-clamp the aorta. This resulted in a large amount of backflow of oxygenated blood from the left ventricle, necessitating additional vents in the pulmonary artery and directly in the left ventricle. The procedure was continued uneventfully, and postoperative recovery was without significant complications. Postoperative 2D computed tomography did not show any signs of a shunt, but 3D reconstruction showed a small patent ductus arteriosus. PMID:25164136
Automated 3D reconstruction of interiors with multiple scan views
NASA Astrophysics Data System (ADS)
Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.
1998-12-01
This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction: an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available online via the RESOLV web page at http://www.scs.leeds.ac.uk/resolv/.
3D Computations and Experiments
Couch, R; Faux, D; Goto, D; Nikkel, D
2004-04-05
This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.
Tomographic system for 3D temperature reconstruction
NASA Astrophysics Data System (ADS)
Antos, Martin; Malina, Radomir
2003-11-01
The novel laboratory system for optical tomography is used to obtain the three-dimensional temperature field around a heated element. Mach-Zehnder holographic interferometers with diffusive illumination of the phase object make it possible to scan multidirectional holographic interferograms over a range of viewing angles from 0 deg to 108 deg. These interferograms form the input data for the computed tomography of the 3D distribution of the refractive index variation, which characterizes the physical state of the studied medium. The configuration of the system allows automatic projection scanning of the studied phase object. The computer calculates the wavefront deformation for each projection, making use of different Fourier-transform and phase-sampling evaluation methods. The experimental set-up together with experimental results is presented.
3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction
Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie
2015-01-01
Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When the emitter density is low in each frame, emitters can be localized to nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three-dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate emitters at high density causes poor temporal resolution of localization-based superresolution techniques and significantly limits their application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three-dimensional space. Our platform combines a multi-focus system with astigmatic optics and an ℓ1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphics processing unit (GPU), which speeds up processing 10 times compared with a central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
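The weighted-centroid refinement mentioned above can be illustrated with a toy sub-pixel localizer. This is only the generic intensity-weighted centroid, not the authors' full compressed-sensing/ℓ1-Homotopy pipeline:

```python
import numpy as np

def weighted_centroid(spot):
    """Sub-pixel emitter localization by intensity-weighted centroid.

    spot : small 2D array containing one background-subtracted PSF spot.
    Returns (row, col) of the centroid in pixel units. This is the kind
    of cheap refinement applied after a discrete localization stage to
    reduce the bias of an on-grid solution.
    """
    total = spot.sum()
    rows, cols = np.indices(spot.shape)
    return (rows * spot).sum() / total, (cols * spot).sum() / total

# Symmetric Gaussian spot centred at (3.5, 3.5) on an 8x8 pixel grid
y, x = np.indices((8, 8))
spot = np.exp(-((y - 3.5) ** 2 + (x - 3.5) ** 2) / 2.0)
r, c = weighted_centroid(spot)
print(round(float(r), 3), round(float(c), 3))  # 3.5 3.5
```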
Reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa
2013-08-01
Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. Modelling 3D objects rapidly and effectively is a challenge. A 3D model can be extracted from multiple images: the system only requires a sequence of images taken with a camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud stitching and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by a camera moving freely around the object. Second, pairwise matching is performed with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, processed against the previous one, the points of interest corresponding to those in previous images are refined or corrected and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and extrinsic parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is then acquired using a non-local cost aggregation method for stereo matching, and a point cloud sequence is obtained from the scene depths and the extrinsic camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3
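One step of the pipeline above, turning a stereo disparity map into metric scene depth once the cameras are calibrated, reduces to the pinhole relation Z = f·B/d. A hedged NumPy sketch with made-up calibration numbers (not the paper's non-local matcher):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map to metric depth via Z = f * B / d.

    disparity  : per-pixel disparity in pixels (0 treated as invalid)
    focal_px   : focal length in pixels, from camera calibration
    baseline_m : distance between the two camera centres in metres
    """
    d = np.asarray(disparity, float)
    depth = np.full_like(d, np.inf)      # invalid pixels -> infinite depth
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Illustrative calibration: f = 800 px, baseline = 10 cm
disp = np.array([[10.0, 20.0],
                 [0.0, 40.0]])
depths = depth_from_disparity(disp, focal_px=800, baseline_m=0.1)
print(depths)  # 8 m, 4 m, invalid, 2 m
```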
Reconstruction and 3D visualisation based on objective real 3D based documentation.
Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A
2012-09-01
Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427
A new algorithm for 3D reconstruction from support functions.
Gardner, Richard J; Kiderlen, Markus
2009-03-01
We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab, and it works for both 2D and 3D reconstructions (in fact, in principle, in any dimension). Reconstructions may be obtained without any pre- or post-processing steps and with no restriction on the sets of measurement directions except their number, a limitation dictated only by computing time. An algorithm due to Prince and Willsky was implemented earlier for 2D reconstructions, and we compare the performance of their algorithm and ours. Our algorithm, however, is the first that works for 3D reconstructions with the freedom just stated. Moreover, under mild conditions, theory guarantees that outputs of the new algorithm will converge to the input shape as the number of measurements increases. In addition we offer a linear programming version of the new algorithm that is much faster and better, or at least comparable, in performance at low levels of noise and reasonably small numbers of measurements. Another modification of the algorithm, suitable for use in a "focus of attention" scheme, is also described. PMID:19147881
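The least-squares idea can be shown on the simplest possible convex body. For a disc with centre c and radius r the support function is h(θ) = c_x cos θ + c_y sin θ + r, which is linear in (c_x, c_y, r), so ordinary least squares recovers the disc from noisy support values. This is only a toy instance of support-function reconstruction, not the authors' general algorithm:

```python
import numpy as np

def fit_disc_from_support(thetas, h_meas):
    """Least-squares fit of a disc (centre, radius) to noisy support values.

    thetas : measurement directions (angles in radians)
    h_meas : noisy support function values h(theta)
    The model h = cx*cos(t) + cy*sin(t) + r is linear in its parameters.
    """
    A = np.column_stack([np.cos(thetas), np.sin(thetas), np.ones_like(thetas)])
    sol, *_ = np.linalg.lstsq(A, h_meas, rcond=None)
    return sol  # (cx, cy, r)

rng = np.random.default_rng(0)
thetas = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, r = 1.0, -0.5, 2.0   # ground-truth disc
h = cx * np.cos(thetas) + cy * np.sin(thetas) + r
h += rng.normal(0, 0.01, thetas.size)   # noisy support measurements
est = fit_disc_from_support(thetas, h)
print(np.round(est, 2))  # approximately [1.0, -0.5, 2.0]
```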
3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance
Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro
2014-09-15
Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
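The rigid-alignment step at the core of such a registration can be illustrated with the classic Kabsch/Procrustes solution. Unlike the paper's GMM approach, this toy assumes point correspondences are already known; it only shows the optimal rigid transform given matched points:

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t mapping points P onto Q.

    P, Q : (N, 3) arrays of corresponding points; minimizes ||R P + t - Q||.
    The GMM registration in the paper solves the harder problem where
    correspondences are unknown; this sketch covers only the rigid step.
    """
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# Synthetic test: rotate a point set 30 degrees about z and translate it
P = np.random.default_rng(1).normal(size=(20, 3))
a = np.pi / 6
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [1, 2, 3]))  # True True
```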
Structured Light-Based 3D Reconstruction System for Plants
Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima
2015-01-01
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701
3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.
Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun
2016-08-01
Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been used for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving high magnification and contrast and resolution finer than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of their three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to a remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, the a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling. PMID:27200484
3D Surface Reconstruction and Automatic Camera Calibration
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre
2004-01-01
This viewgraph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.
Oliveira-Santos, Thiago; Baumberger, Christian; Constantinescu, Mihai; Olariu, Radu; Nolte, Lutz-Peter; Alaraibi, Salman; Reyes, Mauricio
2013-05-01
The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understanding the patient's wishes and achieving the desired results. To date, most plastic surgeons rely on either "free hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses three standard 2D pictures to create a 3D representation of the patient's face, on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations for clinical use (average error of about 2 mm) in less than 5 min. PMID:23319167
New method for 3D reconstruction in digital tomosynthesis
NASA Astrophysics Data System (ADS)
Claus, Bernhard E. H.; Eberhard, Jeffrey W.
2002-05-01
Digital tomosynthesis mammography is an advanced x-ray application that can provide detailed 3D information about the imaged breast. We introduce a novel reconstruction method based on simple backprojection, which yields high-contrast reconstructions with reduced artifacts at relatively low computational complexity. The first step of the proposed method is a simple backprojection in which an order-statistics operator (e.g., the minimum) combines the backprojected images into a reconstructed slice. Accordingly, a given pixel value generally does not contribute to all slices. The percentage of slices to which a given pixel value does not contribute, as well as the associated reconstructed values, are collected. Using a form of re-projection consistency constraint, the projection images are then updated, and the order-statistics backprojection step is repeated using the enhanced projection images calculated in the first step. In our digital mammography application, this new approach enhances the contrast of structures in the reconstruction and, in particular, recovers the loss in signal level due to reduced tissue thickness near the skinline, while keeping artifacts to a minimum. We present results obtained with the algorithm for phantom images.
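The order-statistics combination can be seen in a one-dimensional toy: realign each projection for the slice of interest, then take the per-pixel minimum instead of the mean. A hedged sketch with a two-slice phantom and parallel integer shifts (far simpler than real tomosynthesis geometry):

```python
import numpy as np

def reconstruct_slice(projections, shifts, z):
    """Order-statistics (minimum) backprojection of one slice.

    projections : dict mapping view index -> 1D projection array
    shifts      : dict mapping view index -> lateral shift (pixels) that a
                  unit of slice height undergoes in that view
    Realigned projections are combined with the minimum rather than the
    mean, which suppresses bright out-of-plane structures.
    """
    # np.roll wraps at the borders; acceptable for this toy demonstration
    aligned = [np.roll(p, -shifts[v] * z) for v, p in projections.items()]
    return np.minimum.reduce(aligned)

# Two-slice phantom: one bright point in slice 0 and one in slice 1
width = 32
views = [-1, 0, 1]
slices = np.zeros((2, width))
slices[0, 10] = 1.0
slices[1, 20] = 1.0
projections = {v: sum(np.roll(slices[z], v * z) for z in range(2)) for v in views}
shifts = {v: v for v in views}

rec0 = reconstruct_slice(projections, shifts, z=0)
print(rec0[10], rec0[20])  # in-focus point kept; out-of-plane point suppressed
```

With a mean combination the out-of-plane point would leave a blurred ghost; the minimum removes it entirely because the ghost lands at different positions in different views.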
NASA Astrophysics Data System (ADS)
Molleda, Julio; Usamentiaga, Rubén; García, Daniel F.; Bulnes, Francisco G.
2011-03-01
Nowadays, machine vision applications require skilled users to configure, tune, and maintain them. Because such users are scarce, the robustness and reliability of applications usually suffer significantly. Autonomic computing offers a set of principles, such as self-monitoring, self-regulation, and self-repair, which can be used to partially overcome these problems. Systems that include self-monitoring observe their internal states and extract features about them. Systems with self-regulation are capable of regulating their internal parameters to provide the best quality of service under the prevailing operational conditions and environment. Finally, self-repairing systems are able to detect anomalous working behavior and to provide strategies to deal with such conditions. Machine vision applications are a perfect field in which to apply autonomic computing techniques. This type of application has strong constraints on reliability and robustness, especially when working in industrial environments, and must provide accurate results even under changing conditions such as luminance or noise. To exploit the autonomic approach in a machine vision application, we believe the architecture of the system must be designed as a set of orthogonal modules. In this paper, we describe how autonomic computing techniques can be applied to machine vision systems, using as an example a real application: 3D reconstruction in harsh industrial environments based on laser range finding. The application is based on modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring (middle level) and supervision (high level). High-level modules supervise the execution of low-level modules. Based on the information gathered by mid-level modules, they regulate low-level modules to optimize the global quality of service, and tune the module parameters based on operational conditions and on the environment. Regulation actions involve
Dose fractionation theorem in 3-D reconstruction (tomography)
Glaeser, R.M.
1997-02-01
It is commonly assumed that the large number of projections required for single-axis tomography precludes its application to most beam-labile specimens. However, Hegerl and Hoppe have pointed out that the total dose required to achieve statistical significance for each voxel of a computed 3-D reconstruction is the same as that required to obtain a single 2-D image of that isolated voxel, at the same level of statistical significance. Thus a statistically significant 3-D image can be computed from statistically insignificant projections, as long as the total dosage that is distributed among these projections is high enough that it would have resulted in a statistically significant projection, if applied to only one image. We have tested this critical theorem by simulating the tomographic reconstruction of a realistic 3-D model created from an electron micrograph. The simulations verify the basic conclusions of the theorem, even under conditions of high absorption, signal-dependent noise, varying specimen contrast and missing angular range. Furthermore, the simulations demonstrate that individual projections in the series of fractionated-dose images can be aligned by cross-correlation because they contain significant information derived from the summation of features from different depths in the structure. This latter information is generally not useful for structural interpretation prior to 3-D reconstruction, owing to the complexity of most specimens investigated by single-axis tomography. These results, in combination with dose estimates for imaging single voxels and measurements of radiation damage in the electron microscope, demonstrate that it is feasible to use single-axis tomography with soft X-ray microscopy of frozen-hydrated specimens.
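The statistical core of the dose-fractionation argument can be checked numerically under a simple Poisson counting model. This is an illustrative sketch, not the paper's simulation: a total dose giving about 100 expected counts per pixel is split over 50 statistically insignificant exposures, and the aligned sum of the fractions is compared with a single full-dose exposure.

```python
# Hedged numerical check: the sum of N Poisson exposures of mean lam/N has the
# same per-pixel statistics as one Poisson exposure of mean lam, so the summed
# fractionated series reaches the same signal-to-noise ratio as the full dose.
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(10000, 100.0)   # expected counts per pixel at full dose
N = 50                           # number of fractionated exposures

single = rng.poisson(signal)                            # one full-dose image
fractions = rng.poisson(signal / N, size=(N, signal.size))
summed = fractions.sum(axis=0)                          # aligned sum of N low-dose images

snr = lambda x: x.mean() / x.std()                      # empirical per-pixel SNR
```

The two empirical SNR values come out nearly identical, mirroring the theorem's claim that only the total distributed dose matters for per-voxel significance.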
Clinical Experience With A Portable 3-D Reconstruction Program
NASA Astrophysics Data System (ADS)
Holshouser, Barbara A.; Christiansen, Edwin L.; Thompson, Joseph R.; Reynolds, R. Anthony; Goldwasser, Samuel M.
1988-06-01
Clinical experience with a computer program for reconstructing and visualizing three-dimensional (3-D) structures is reported. Applications to the study of soft-tissue and skeletal structures, such as the temporomandibular joint and craniofacial anatomy, using computed tomography (CT) data are described. Several features specific to the computer algorithm are demonstrated and evaluated. These include: (1) manipulation of density windows to selectively visualize bone or soft tissue structures; (2) the efficacy of gradient shading algorithms in revealing fine surface detail; and (3) the rapid generation of cut-away views revealing details of internal structures. Also demonstrated is the importance of high resolution data as input to the 3-D program. The implementation of the program (VoxelView-32) described here is on a MASSCOMP computer running UNIX. Data were collected with General Electric or Siemens CT scanners and transferred to the MASSCOMP for off-line 3-D reconstruction, via magnetic tape or Ethernet. An interactive graphics facility on the MASSCOMP allows viewing of 2-D slices, subregioning, and selection of lower and upper density thresholds for segmentation. The software then enters a pre-processing phase during which a volume representation of the segmented object (soft tissue or bone) is automatically created. This is followed by a rendering phase during which multiple views of the segmented object are automatically generated. The pre-processing phase typically takes 4 to 8 minutes (although very large datasets may require as much as 30 minutes) and the rendering phase typically takes 1 to 2 minutes for each 3-D view. Volume representation and rendering techniques are used at all stages of the processing, and gradient shading is used for enhanced surface detail.
3D segmentation and reconstruction of endobronchial ultrasound
NASA Astrophysics Data System (ADS)
Zang, Xiaonan; Breslav, Mikhail; Higgins, William E.
2013-03-01
State-of-the-art practice for lung-cancer staging bronchoscopy often draws upon a combination of endobronchial ultrasound (EBUS) and multidetector computed-tomography (MDCT) imaging. While EBUS offers real-time in vivo imaging of suspicious lesions and lymph nodes, its low signal-to-noise ratio and tendency to exhibit missing region-of-interest (ROI) boundaries complicate diagnostic tasks. Furthermore, past efforts did not incorporate automated analysis of EBUS images and a subsequent fusion of the EBUS and MDCT data. To address these issues, we propose near real-time automated methods for three-dimensional (3D) EBUS segmentation and reconstruction that generate a 3D ROI model along with ROI measurements. Results derived from phantom data and lung-cancer patients show the promise of the methods. In addition, we present a preliminary image-guided intervention (IGI) system example, whereby EBUS imagery is registered to a patient's MDCT chest scan.
3D scene reconstruction based on 3D laser point cloud combining UAV images
NASA Astrophysics Data System (ADS)
Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen
2016-03-01
Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning: laser point cloud data serve as the basis, a Digital Ortho-photo Map serves as an auxiliary source, and 3ds Max is used as the basic tool for building the three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene has good fidelity and that its accuracy meets the needs of 3D scene construction.
3D Equilibrium Reconstructions in DIII-D
NASA Astrophysics Data System (ADS)
Lao, L. L.; Ferraro, N. W.; Strait, E. J.; Turnbull, A. D.; King, J. D.; Hirshman, H. P.; Lazarus, E. A.; Sontag, A. C.; Hanson, J.; Trevisan, G.
2013-10-01
Accurate and efficient 3D equilibrium reconstruction is needed in tokamaks for study of 3D magnetic field effects on experimentally reconstructed equilibrium and for analysis of MHD stability experiments with externally imposed magnetic perturbations. A large number of new magnetic probes have been recently installed in DIII-D to improve 3D equilibrium measurements and to facilitate 3D reconstructions. The V3FIT code has been in use in DIII-D to support 3D reconstruction and the new magnetic diagnostic design. V3FIT is based on the 3D equilibrium code VMEC that assumes nested magnetic surfaces. V3FIT uses a pseudo-Newton least-square algorithm to search for the solution vector. In parallel, the EFIT equilibrium reconstruction code is being extended to allow for 3D effects using a perturbation approach based on an expansion of the MHD equations. EFIT uses the cylindrical coordinate system and can include the magnetic island and stochastic effects. Algorithms are being developed to allow EFIT to reconstruct 3D perturbed equilibria directly making use of plasma response to 3D perturbations from the GATO, MARS-F, or M3D-C1 MHD codes. DIII-D 3D reconstruction examples using EFIT and V3FIT and the new 3D magnetic data will be presented. Work supported in part by US DOE under DE-FC02-04ER54698, DE-FG02-95ER54309 and DE-AC05-06OR23100.
DIII-D Equilibrium Reconstructions with New 3D Magnetic Probes
NASA Astrophysics Data System (ADS)
Lao, Lang; Strait, E. J.; Ferraro, N. M.; Ferron, J. R.; King, J. D.; Lee, X.; Meneghini, O.; Turnbull, A. D.; Huang, Y.; Qian, J. G.; Wingen, A.
2015-11-01
DIII-D equilibrium reconstructions with the recently installed new 3D magnetic diagnostic are presented. In addition to providing information to allow more accurate 2D reconstructions, the new 3D probes also provide useful information to guide computation of 3D perturbed equilibria. A new more comprehensive magnetic compensation has been implemented. Algorithms are being developed to allow EFIT to reconstruct 3D perturbed equilibria making use of the new 3D probes and plasma responses from 3D MHD codes such as GATO and M3D-C1. To improve the computation efficiency, all inactive probes in one of the toroidal planes in EFIT have been replaced with new probes from other planes. Other 3D efforts include testing of 3D reconstructions using V3FIT and a new 3D variational moment equilibrium code VMOM3D. Other EFIT developments include a GPU EFIT version and new safety factor and MSE-LS constraints. The accuracy and limitation of the new probes for 3D reconstructions will be discussed. Supported by US DOE under DE-FC02-04ER54698 and DE-FG02-95ER54309.
The sinogram polygonizer for reconstructing 3D shapes.
Yamanaka, Daiki; Ohtake, Yutaka; Suzuki, Hiromasa
2013-11-01
This paper proposes a novel approach, the sinogram polygonizer, for directly reconstructing 3D shapes from sinograms (i.e., the primary output from X-ray computed tomography (CT) scanners consisting of projection image sequences of an object shown from different viewing angles). To obtain a polygon mesh approximating the surface of a scanned object, a grid-based isosurface polygonizer, such as Marching Cubes, has been conventionally applied to the CT volume reconstructed from a sinogram. In contrast, the proposed method treats CT values as a continuous function and directly extracts a triangle mesh based on tetrahedral mesh deformation. This deformation involves quadratic error metric minimization and optimal Delaunay triangulation for the generation of accurate, high-quality meshes. Thanks to the analytical gradient estimation of CT values, sharp features are well approximated, even though the generated mesh is very coarse. Moreover, this approach eliminates aliasing artifacts on triangle meshes. PMID:24029910
3D Building Reconstruction Using Dense Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.
2016-06-01
Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the point clouds of roofs and walls into planar groups. By generating the related surfaces and applying geometrical constraints together with symmetry considerations, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The details of the reconstructed 3D model reach LoD3, covering the modelling of eaves, roof fractions and dormers.
Digital 3D facial reconstruction of George Washington
NASA Astrophysics Data System (ADS)
Razdan, Anshuman; Schwartz, Jeff; Tocheri, Mathew; Hansford, Dianne
2006-02-01
PRISM is a focal point of interdisciplinary research in geometric modeling, computer graphics and visualization at Arizona State University. Many projects in the last ten years have involved laser scanning, geometric modeling and feature extraction from such data as archaeological vessels, bones, human faces, etc. This paper gives a brief overview of a recently completed project on the 3D reconstruction of George Washington (GW). The project brought together forensic anthropologists, digital artists and computer scientists in the 3D digital reconstruction of GW at ages 57, 45 and 19, including detailed heads and bodies. Although many other scanning projects, such as the Michelangelo project, have successfully captured fine details via laser scanning, our project took it a step further, i.e. to predict what the individual in the sculpture might have looked like in both later and earlier years, specifically devising a process to account for reverse aging. Our base data were GW's face mask at the Morgan Library and Houdon's bust of GW at Mount Vernon, both made when GW was 53. Additionally, we scanned the statue at the Capitol in Richmond, VA, various dentures, and other items. Other measurements came from clothing and even portraits of GW. The digital GW models were then milled in high-density foam for a studio to complete the work. These will be unveiled at the opening of the new education center at Mount Vernon in fall 2006.
NASA Astrophysics Data System (ADS)
Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella
2015-09-01
Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.
Interior Reconstruction Using the 3D Hough Transform
NASA Astrophysics Data System (ADS)
Dumitru, R.-C.; Borrmann, D.; Nüchter, A.
2013-02-01
Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments in an automatic fashion, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
3D scene reconstruction from multi-aperture images
NASA Astrophysics Data System (ADS)
Mao, Miao; Qin, Kaihuai
2014-04-01
With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. First, images with different apertures are captured via a programmable aperture. Second, the SIFT method is used for feature point matching. Then binocular stereo vision is exploited to calculate the camera parameters and the 3D positions of the matched points, forming a sparse 3D scene model. Finally, patch-based multi-view stereo is applied to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
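The "3D positions of matching points" step in such a pipeline is classically solved by linear (DLT) triangulation. The sketch below is a standard illustration under assumed toy camera matrices, not the paper's implementation: two views of a known point are synthesized and the point is recovered from its projections.

```python
# Hedged sketch: linear (DLT) triangulation of one 3D point from two views.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Solve A X = 0 for the homogeneous 3D point seen at x1 in P1 and x2 in P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null vector of A = homogeneous solution
    return X[:3] / X[3]

def project(P, X):
    u = P @ np.append(X, 1.0)    # pinhole projection of a 3D point
    return u[:2] / u[2]

# Two toy cameras: identity pose and a 1-unit baseline along x (assumed values).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In the noiseless case the SVD null vector reproduces the point exactly; with noisy matches the same formulation gives the least-squares solution.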
Fast fully 3-D image reconstruction in PET using planograms.
Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W
2004-04-01
We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
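The planogram machinery above rests on a Fourier relationship between projections and sections of the transform. Its familiar 2D analogue, the central-section (projection-slice) theorem, is easy to verify numerically; the sketch below is an illustration of that theorem only, not the 4-D planogram code.

```python
# Hedged sketch: the 1D FFT of a parallel projection of a 2D image equals the
# corresponding central line of the image's 2D FFT (projection-slice theorem).
import numpy as np

img = np.zeros((64, 64))
img[20:40, 25:35] = 1.0                 # a simple rectangular "object"

projection = img.sum(axis=0)            # line integrals along columns
slice_1d = np.fft.fft(projection)       # 1D FT of the projection
central_row = np.fft.fft2(img)[0, :]    # central section of the 2D FT
```

The two arrays agree to machine precision, which is the property that lets backprojection be replaced by Fourier-domain sections, as the abstract exploits in four dimensions.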
Accuracy of 3D Reconstruction in an Illumination Dome
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay; Toschi, Isabella; Nocerino, Erica; Hess, Mona; Remondino, Fabio; Robson, Stuart
2016-06-01
The accuracy of 3D surface reconstruction was compared from image sets of a Metric Test Object taken in an illumination dome by two methods: photometric stereo and improved structure-from-motion (SfM), using point cloud data from a 3D colour laser scanner as the reference. Metrics included pointwise height differences over the digital elevation model (DEM), and 3D Euclidean differences between corresponding points. The enhancement of spatial detail was investigated by blending high frequency detail from photometric normals, after a Poisson surface reconstruction, with low frequency detail from a DEM derived from SfM.
Improving 3D Genome Reconstructions Using Orthologous and Functional Constraints
Diament, Alon; Tuller, Tamir
2015-01-01
The study of the 3D architecture of chromosomes has been advancing rapidly in recent years. While a number of methods for 3D reconstruction of genomic models based on Hi-C data were proposed, most of the analyses in the field have been performed on different 3D representation forms (such as graphs). Here, we reproduce most of the previous results on the 3D genomic organization of the eukaryote Saccharomyces cerevisiae using analysis of 3D reconstructions. We show that many of these results can be reproduced in sparse reconstructions, generated from a small fraction of the experimental data (5% of the data), and study the properties of such models. Finally, we propose for the first time a novel approach for improving the accuracy of 3D reconstructions by introducing additional predicted physical interactions to the model, based on orthologous interactions in an evolutionary-related organism and based on predicted functional interactions between genes. We demonstrate that this approach indeed leads to the reconstruction of improved models. PMID:26000633
Tomographic compressive holographic reconstruction of 3D objects
NASA Astrophysics Data System (ADS)
Nehmetallah, G.; Williams, L.; Banerjee, P. P.
2012-10-01
Compressive holography with multiple projection tomography is applied to solve the ill-posed inverse problem of reconstructing 3D objects with high axial accuracy. To visualize the 3D shape, we propose Digital Tomographic Compressive Holography (DiTCH), where projections from more than one direction, as in tomographic imaging systems, can be employed, so that a 3D shape with better axial resolution can be reconstructed. We compare DiTCH with single-beam holographic tomography (SHOT), which is based on Fresnel back-propagation. A brief theory of DiTCH is presented, and experimental results of 3D shape reconstruction of objects using DiTCH and SHOT are compared.
Single view-based 3D face reconstruction robust to self-occlusion
NASA Astrophysics Data System (ADS)
Lee, Youn Joo; Lee, Sung Joo; Park, Kang Ryoung; Jo, Jaeik; Kim, Jaihie
2012-12-01
The state-of-the-art 3D morphable model (3DMM) is widely used for 3D face reconstruction from a single image. However, this method has a high computational cost, and hence a simplified 3D morphable model (S3DMM) was proposed as an alternative. Unlike the original 3DMM, S3DMM uses only a sparse 3D facial shape and therefore incurs a lower computational cost. However, this method is vulnerable to self-occlusion due to head rotation. Therefore, we propose a solution to the self-occlusion problem in S3DMM-based 3D face reconstruction. This research is novel compared with previous works in the following three respects. First, self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model. Second, a 3D model fitting scheme is designed based on selected visible facial feature points, which facilitates 3D face reconstruction without any effect from self-occlusion. Third, the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3D model fitting process. The experimental results showed that the self-occlusion detection had high accuracy and that our proposed method delivered a noticeable improvement in 3D face reconstruction performance compared with previous methods.
Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.
Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed
2009-06-01
Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm. PMID:19380272
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Maiti, Abhik; Chakravarty, Debashish
2016-01-01
3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic, watertight 3D surfaces of irregularly shaped objects from digital image sequences. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, and subsequently dense, 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of 3D surfaces from point clouds of different densities are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) value, with greater SN values resulting in better-quality surfaces. The quality of the 3D surface generated using the ball-pivoting algorithm is likewise found to be highly dependent on the clustering radius and angle threshold values. The results obtained from this study give readers valuable insight into the effects of the different control parameters on the reconstructed surface quality. PMID:27386376
3D imaging reconstruction and impacted third molars: case reports
Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea
2012-01-01
There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third-molar surgery, making a careful pre-operative evaluation of its anatomical relationship with the third molars by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between the third molars and the mandibular canal using dental CT scans, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we conclude that 3D images are not indispensable, but they can provide very valuable assistance in the most complicated cases. PMID:23386934
3D Reconstruction For The Detection Of Cranial Anomalies
NASA Astrophysics Data System (ADS)
Kettner, B.; Shalev, S.; Lavelle, C.
1986-01-01
There is a growing interest in the use of three-dimensional (3D) cranial reconstruction from CT scans for surgical planning. A low-cost imaging system has been developed, which provides pseudo-3D images which may be manipulated to reveal the craniofacial skeleton as a whole or any particular component region. The contrast between congenital (hydrocephalic), normocephalic and acquired (carcinoma of the maxillary sinus) anomalous cranial forms demonstrates the potential of this system.
Bound constrained bundle adjustment for reliable 3D reconstruction.
Gong, Yuanzheng; Meng, De; Seibel, Eric J
2015-04-20
Bundle adjustment (BA) is a common estimation algorithm that is widely used in machine vision as the last step in a feature-based three-dimensional (3D) reconstruction algorithm. BA is essentially a non-convex non-linear least-square problem that can simultaneously solve the 3D coordinates of all the feature points describing the scene geometry, as well as the parameters of the camera. The conventional BA takes a parameter either as a fixed value or as an unconstrained variable based on whether the parameter is known or not. In cases where the known parameters are inaccurate but constrained in a range, conventional BA results in an incorrect 3D reconstruction by using these parameters as fixed values. On the other hand, these inaccurate parameters can be treated as unknown variables, but this does not exploit the knowledge of the constraints, and the resulting reconstruction can be erroneous since the BA optimization halts at a dramatically incorrect local minimum due to its non-convexity. In many practical 3D reconstruction applications, unknown variables with range constraints are usually available, such as a measurement with a range of uncertainty or a bounded estimate. Thus to better utilize these pre-known, constrained, but inaccurate parameters, a bound constrained bundle adjustment (BCBA) algorithm is proposed, developed and tested in this study. A scanning fiber endoscope (the camera) is used to capture a sequence of images above a surgery phantom (the object) of known geometry. 3D virtual models are reconstructed based on these images and then compared with the ground truth. The experimental results demonstrate BCBA can achieve a more reliable, rapid, and accurate 3D reconstruction than conventional bundle adjustment. PMID:25969115
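The optimization primitive behind BCBA, non-linear least squares with box bounds on selected variables, can be illustrated with SciPy's trust-region reflective solver. This is a hedged toy, not the authors' BCBA code: a pinhole-style residual in an assumed focal length f and depth z, with f constrained to a plausible range, stands in for the full bundle of camera and structure parameters.

```python
# Hedged sketch: bound-constrained non-linear least squares, the building
# block of BCBA, on a toy two-view reprojection problem (illustrative only).
import numpy as np
from scipy.optimize import least_squares

def residuals(params, x, u1, u2):
    """Reprojection residuals for a point grid seen from two axial positions."""
    f, z = params
    return np.concatenate([f * x / z - u1,          # first view at depth z
                           f * x / (z - 1.0) - u2]) # second view, 1 unit closer

x = np.array([0.5, 1.0, 1.5, 2.0])       # lateral world coordinates
u1 = 800.0 * x / 4.0                      # synthetic observations: f=800, z=4
u2 = 800.0 * x / 3.0

# f is known only to lie in [700, 900]; z is bounded to keep the geometry valid.
fit = least_squares(residuals, x0=[750.0, 2.5],
                    bounds=([700.0, 1.5], [900.0, 10.0]),
                    args=(x, u1, u2))
f_hat, z_hat = fit.x
```

Treating f as a bounded variable rather than a fixed (and possibly wrong) constant is exactly the trade-off the abstract argues for: the bound encodes the measurement uncertainty without freezing an inaccurate value.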
3D scanning modeling method application in ancient city reconstruction
NASA Astrophysics Data System (ADS)
Ren, Pu; Zhou, Mingquan; Du, Guoguang; Shui, Wuyang; Zhou, Pengbo
2015-07-01
With the development of optical engineering technology, 3D scanning equipment has become increasingly precise and its role in 3D modeling increasingly prominent. This paper proposes a 3D scanning modeling method that has been successfully applied to Chinese ancient city reconstruction. On one hand, for existing architecture, an improved algorithm based on multiple scans is adopted. First, two pieces of scanning data are coarsely rigid-registered using spherical displacers and a vertex clustering method. Second, a globally weighted ICP (iterative closest point) method is used to achieve fine rigid registration. On the other hand, for buildings that have already disappeared, an exemplar-driven algorithm for rapid modeling is proposed. Based on 3D scanning technology and historical data, a systematic approach is presented for 3D modeling and virtual display of the ancient city.
3D reconstruction with two webcams and a laser line projector
NASA Astrophysics Data System (ADS)
Li, Dongdong; Hui, Bingwei; Qiu, Shaohua; Wen, Gongjian
2014-09-01
Three-dimensional (3D) reconstruction is one of the most attractive research topics in photogrammetry and computer vision, and 3D reconstruction with simple, consumer-grade equipment is playing an increasingly important role. In this paper, a 3D reconstruction desktop system is built based on binocular stereo vision using a laser scanner. The hardware requirements are a simple commercial hand-held laser line projector and two common webcams for image acquisition. Generally, 3D reconstruction based on passive triangulation methods requires point correspondences among the various viewpoints, and the development of matching algorithms remains a challenging task in computer vision. In our proposal, with the help of a laser line projector, stereo correspondences are established robustly from epipolar geometry and the laser shadow on the scanned object. To establish correspondences more conveniently, epipolar rectification is employed using Bouguet's method after stereo calibration with a printed chessboard. 3D coordinates of the observed points are computed with ray-ray triangulation, and reconstruction outliers are removed with the planarity constraint of the laser plane. Dense 3D point clouds are derived from multiple scans under different orientations, each obtained by sweeping the laser plane across the object to be reconstructed. The Iterative Closest Point algorithm is employed to register the derived point clouds: the rigid-body transformation between neighboring scans is estimated to obtain the complete 3D point cloud. Finally, polygon meshes are reconstructed from the derived point cloud, and color images are used in texture mapping to produce a lifelike 3D model. Experiments show that our reconstruction method is simple and efficient.
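The core geometric step here, triangulating a point as the intersection of a camera ray with the calibrated laser plane, is compact enough to sketch. The numbers below (focal length, principal point, plane equation) are invented for illustration and are not from the paper's setup:

```python
import numpy as np

# Minimal ray-plane triangulation, as used in laser-line scanning:
# a pixel back-projects to a ray from the camera center, and the 3D
# point is the ray's intersection with the calibrated laser plane.

def pixel_to_ray(u, v, f, cx, cy):
    """Back-project pixel (u, v) to a unit ray in camera coordinates."""
    d = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    return d / np.linalg.norm(d)

def intersect_plane(origin, direction, n, d):
    """Intersect the ray origin + t*direction with the plane n.x + d = 0."""
    t = -(n @ origin + d) / (n @ direction)
    return origin + t * direction

# Hypothetical laser plane x = 0.3 m in the camera frame: n=(1,0,0), d=-0.3
n, d = np.array([1.0, 0.0, 0.0]), -0.3
ray = pixel_to_ray(800, 600, f=1000.0, cx=640.0, cy=512.0)
point = intersect_plane(np.zeros(3), ray, n, d)
```

The recovered point is independent of the ray's normalization; only its direction matters.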
New Reconstruction Accuracy Metric for 3D PIV
NASA Astrophysics Data System (ADS)
Bajpayee, Abhishek; Techet, Alexandra
2015-11-01
Reconstruction for 3D PIV typically relies on recombining images captured from different viewpoints via multiple cameras/apertures. Ideally, the quality of reconstruction dictates the accuracy of the derived velocity field. A reconstruction quality parameter Q is commonly used as a measure of the accuracy of reconstruction algorithms. By definition, a high Q value requires intensity peak levels and shapes in the reconstructed and reference volumes to be matched. We show that accurate velocity fields rely only on the peak locations in the volumes, not on intensity peak levels and shapes. In synthetic aperture (SA) PIV reconstructions, the intensity peak shapes and heights vary with the number of cameras and with spatial/temporal particle intensity variation, respectively. This lowers Q but not the accuracy of the derived velocity field. We introduce a new velocity vector correlation factor Qv as a metric to assess the accuracy of 3D PIV techniques, which provides a better indication of algorithm accuracy. For SAPIV, the number of cameras required for a high Qv is lower than that for a high Q. We discuss Qv in the context of 3D PIV and also present a preliminary comparison of the performance of TomoPIV and SAPIV based on Qv.
Automatic Reconstruction of Spacecraft 3D Shape from Imagery
NASA Astrophysics Data System (ADS)
Poelman, C.; Radtke, R.; Voorhees, H.
We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
3D Reconstruction of Human Motion from Monocular Image Sequences.
Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo
2016-08-01
This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method relies on base poses trained a priori. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
Iterative Reconstruction of Volumetric Particle Distribution for 3D Velocimetry
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard; Neal, Douglas
2011-11-01
A number of different volumetric flow measurement techniques exist for following the motion of illuminated particles. For experiments with lower seeding densities, 3D-PTV uses recorded images from typically 3-4 cameras and tracks the individual particles in space and time. For flows with a higher seeding density, tomographic PIV uses a tomographic reconstruction algorithm (e.g. MART) to reconstruct voxel intensities of the recorded volume, followed by cross-correlation of subvolumes to provide the instantaneous 3D vector fields on a regular grid. A new hybrid algorithm is presented which iteratively reconstructs the 3D particle distribution directly, using particles with certain imaging properties instead of voxels as base functions. It is shown with synthetic data that this method is capable of reconstructing densely seeded flows up to 0.05 particles per pixel (ppp) with the same or higher accuracy than 3D-PTV and tomographic PIV. Finally, this new method is validated using experimental data on a turbulent jet.
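The MART scheme mentioned above can be illustrated on a toy problem. This is a generic multiplicative-ART sketch on a 2x2 "volume" observed by four line sums, not the tomographic-PIV implementation itself:

```python
import numpy as np

# MART (multiplicative ART): sweep over the measured line integrals b_i
# and apply a multiplicative correction to every unknown the ray touches.
A = np.array([[1, 1, 0, 0],    # row sums of the 2x2 grid
              [0, 0, 1, 1],
              [1, 0, 1, 0],    # column sums
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true                 # consistent synthetic projections

x = np.ones(4)                 # positive initial guess (required by MART)
for _ in range(200):
    for i in range(len(b)):
        proj = A[i] @ x
        x *= (b[i] / proj) ** A[i]   # update only entries with A[i,j] = 1
```

Because the update is multiplicative, the iterate stays positive and converges to a solution consistent with all projections.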
Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; White, Stuart C.
1992-05-01
This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study to demonstrate the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT data to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction software requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method of producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla in which the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded-surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-reconstructed lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for 3-D reconstruction, as well as cautionary language that should accompany the 3-D images.
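The threshold sensitivity this case study demonstrates is easy to reproduce numerically. The sketch below (synthetic 1D profile, invented blur width and thresholds) blurs a bar of known width, as a scanner's point-spread function would, then measures it at two thresholds: the midpoint (half-maximum) threshold recovers the true width, while an overly high threshold shrinks the object by about half, an error of the same order as the 44 percent reported above.

```python
import numpy as np

# A bar of true width 20 samples, blurred by a Gaussian "PSF".
x = np.arange(200)
profile = ((x >= 90) & (x < 110)).astype(float)   # true width 20

g = np.exp(-0.5 * (np.arange(-15, 16) / 4.0) ** 2)  # sigma = 4 samples
g /= g.sum()
blurred = np.convolve(profile, g, mode="same")

width_half = np.sum(blurred >= 0.5)    # midpoint threshold: faithful
width_high = np.sum(blurred >= 0.9)    # too-high threshold: shrinks the bar
```

With a symmetric blur, the half-maximum crossing sits on the true edge, which is why the midpoint-intensity rule in the abstract gives maximum theoretical fidelity.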
3D video sequence reconstruction algorithms implemented on a DSP
NASA Astrophysics Data System (ADS)
Ponomaryov, V. I.; Ramos-Diaz, E.
2011-03-01
A novel approach for 3D image and video reconstruction is proposed and implemented, based on wavelet atomic functions (WAF), which have demonstrated better approximation properties than classical wavelets in various processing problems. Disparity maps are formed using WAF and then employed to present 3D visualizations as color anaglyphs. Additionally, compression via the Pth law is performed to improve disparity map quality. Other approaches, such as optical flow and a stereo matching algorithm, are also implemented for comparison. Numerous simulation results justify the efficiency of the novel framework. The implementation of the proposed algorithm on the Texas Instruments DSP TMS320DM642 demonstrates that real-time processing is possible during 3D reconstruction of images and video sequences.
Incremental volume reconstruction and rendering for 3-D ultrasound imaging
NASA Astrophysics Data System (ADS)
Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry
1992-09-01
In this paper, we present approaches toward an interactive visualization of a real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
[3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].
Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu
2015-08-01
The aim of this study was to propose a three-dimensional projection onto convex sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. First, we built low-resolution 3D images with sub-pixel spatial displacements between each other and generated the reference image. Then, we mapped the low-resolution images into the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency-constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we displayed the different resolution images simultaneously. We evaluated the performance of the proposed method on 5 image sets and compared it with those of 3 interpolation-based reconstruction methods. The experiments showed that the 3D POCS algorithm outperformed the 3 interpolation methods both subjectively and objectively, and that the mixed display mode is suitable for 3D visualization of high-resolution pulmonary nodules. PMID:26710449
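The POCS idea, alternating projections onto convex constraint sets until reaching a point consistent with all of them, can be shown in miniature. The two sets below (a line standing in for a data-consistency constraint, a box for an amplitude constraint) are invented for illustration and are far simpler than the consistency sets of the 3D POCS algorithm:

```python
import numpy as np

# POCS in miniature: alternate projections onto two convex sets
# until the iterate lies in their intersection.
def project_line(x):
    """Project onto the line {x : x0 + x1 = 3} (a 'data' constraint)."""
    shift = (3.0 - x.sum()) / 2.0
    return x + shift

def project_box(x):
    """Project onto the box {x : 0 <= xi <= 2} (an 'amplitude' constraint)."""
    return np.clip(x, 0.0, 2.0)

x = np.array([5.0, -4.0])        # start far outside both sets
for _ in range(100):
    x = project_box(project_line(x))
```

Here the intersection is the segment from (1, 2) to (2, 1); from this start the iterates converge to its endpoint (2, 1).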
On detailed 3D reconstruction of large indoor environments
NASA Astrophysics Data System (ADS)
Bondarev, Egor
2015-03-01
In this paper we present techniques for highly detailed 3D reconstruction of extra-large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm³ on large 100,000 m³ models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth-data noise filtering. Other important aspects of extra-large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing a point cloud to be reduced in size by 80-95%. Besides this, we introduce a method for online rendering of extra-large point clouds, enabling real-time visualization of huge cloud spaces in conventional web browsers.
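As a rough illustration of decimation rates like the 80-95% quoted above, here is a minimal voxel-grid decimation: points are bucketed into cubic cells and each occupied cell is replaced by its centroid. This is a generic stand-in, not the paper's planar-based method, and the cloud is synthetic:

```python
import numpy as np

def decimate(points, cell=0.05):
    """Replace all points in each occupied cubic cell by their centroid."""
    keys = np.floor(points / cell).astype(np.int64)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        s, n = cells.get(key, (np.zeros(3), 0))
        cells[key] = (s + p, n + 1)
    return np.array([s / n for s, n in cells.values()])

rng = np.random.default_rng(0)
cloud = rng.random((10000, 3))          # dense synthetic unit-cube cloud
thin = decimate(cloud, cell=0.1)        # at most 10^3 = 1000 cells survive
```

With 10,000 points and 1,000 cells, nearly every cell is occupied, so the cloud shrinks by about 90%, squarely in the quoted range.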
3D reconstruction methods of coronal structures by radio observations
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.
1992-11-01
The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.
3D reconstruction methods of coronal structures by radio observations
NASA Technical Reports Server (NTRS)
Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.
1992-01-01
The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.
Analysis of method of 3D shape reconstruction using scanning deflectometry
NASA Astrophysics Data System (ADS)
Novák, Jiří; Novák, Pavel; Mikš, Antonín.
2013-04-01
This work presents a scanning deflectometric approach to the 3D surface reconstruction problem, based on measurements of the surface gradient of optically smooth surfaces. It is shown that the description of this problem leads to a nonlinear first-order partial differential equation (PDE), from which the surface shape can be reconstructed numerically. A method for efficiently solving this differential equation is proposed, based on transforming the PDE into an optimization problem. We describe different types of surface representation for the shape reconstruction, and a numerical simulation of the presented method is performed. The reconstruction process is analyzed by computer simulations and demonstrated on examples. The analysis confirms the robustness of the reconstruction method and its suitability for measurement and reconstruction of the 3D shape of specular surfaces.
Scattering robust 3D reconstruction via polarized transient imaging.
Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai
2016-09-01
Reconstructing the 3D structure of scenes in scattering media is a challenging task of great research value. Existing techniques often impose strong assumptions on the scattering behavior and offer limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve scene depth by detecting the reflection instant on the time profile of a surface point. However, in the presence of a scattering medium, rays are both reflected and scattered during transmission, and the depth calculated from the time profile deviates largely from the true value. To handle this problem, we exploit the different polarization behaviors of the reflection and scattering components, and introduce active polarization to separate the reflection component and estimate a scattering-robust depth. Our experiments demonstrate that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944
Optical Sensors and Methods for Underwater 3D Reconstruction.
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV: the image topology map significantly reduces the running time of feature matching by limiting the combinations of images considered. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, especially for rapid response and precise modelling in disaster emergencies.
Near-infrared optical imaging of human brain based on the semi-3D reconstruction algorithm
NASA Astrophysics Data System (ADS)
Liu, Ming; Meng, Wei; Qin, Zhuanping; Zhou, Xiaoqing; Zhao, Huijuan; Gao, Feng
2013-03-01
In non-invasive brain imaging with near-infrared light, a precise head model is of great significance to the forward model and the image reconstruction. To deal with individual differences in human head tissues and the problem of irregular curvature, in this paper we extracted the head structure from the MRI image of a volunteer using the Mimics software. This scheme makes it possible to assign optical parameters to every layer of the head tissues reasonably and to solve the diffusion equation with finite-element analysis. For the inverse problem, a semi-3D reconstruction algorithm is adopted to trade off computation cost against accuracy, between full 3-D and 2-D reconstruction. In this scheme, the changes in the optical properties of the inclusions are assumed either axially invariant or confined to the imaging plane, while the 3-D nature of the photon migration is still retained. This leads to a 2-D inverse problem with a matched 3-D forward model. Simulation results show that, compared to the full 3-D reconstruction algorithm, the semi-3D reconstruction algorithm cuts computation time by 27%.
3-D reconstruction of neurons from multichannel confocal laser scanning image series.
Wouterlood, Floris G
2014-01-01
A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. PMID:24723320
3D Reconstruction of Coronary Artery Vascular Smooth Muscle Cells
Luo, Tong; Chen, Huan; Kassab, Ghassan S.
2016-01-01
Aims The 3D geometries of individual vascular smooth muscle cells (VSMCs), which are essential for understanding the mechanical function of blood vessels, are currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation. Methods and Results A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps and to extract the 3D geometry of VSMCs. A new edge-blocking model was introduced to recognize cell boundaries, while an edge-growing model was developed for optimal interpolation and edge verification. The proposed methods were designed around a Region of Interest (ROI) selected by the user and interactive responses for a limited number of key edges. Enhanced cell boundary features were used to construct the cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt angle measurements, while the other parameters extracted from the 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width and thickness of VSMCs were 62.9±14.9 μm, 4.6±0.6 μm and 6.2±1.8 μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction with two mean angles of -19.4±9.3° and 10.9±4.7°, while the out-of-plane (radial tilt) angle was found to be 8±7.6° with a median of 5.7°. Conclusions A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls from optical image stacks. The results were validated with a virtual phantom and manual measurements. The obtained 3D geometries can be utilized in mathematical models and lead to a better understanding of vascular mechanical properties and function. PMID:26882342
3D ultrasound computer tomography: update from a clinical study
NASA Astrophysics Data System (ADS)
Hopp, T.; Zapf, M.; Kretzek, E.; Henrich, J.; Tukalo, A.; Gemmeke, H.; Kaiser, C.; Knaudt, J.; Ruiter, N. V.
2016-04-01
Ultrasound Computer Tomography (USCT) is a promising new imaging method for breast cancer diagnosis. We developed a 3D USCT system and tested it in a pilot study with encouraging results: 3D USCT was able to depict two carcinomas, which were present in contrast-enhanced MRI volumes serving as ground truth. To overcome severe differences in the breast shape, an image registration was applied. We analyzed the correlation between average sound speed in the breast and the breast density estimated from segmented MRIs and found a positive correlation with R=0.70. Based on the results of the pilot study we now carry out a successive clinical study with 200 patients. For this we integrated our reconstruction methods and image post-processing into a comprehensive workflow. It includes a dedicated DICOM viewer for interactive assessment of fused USCT images. A new preview mode now allows intuitive and faster patient positioning. We updated the USCT system to decrease the data acquisition time by approximately a factor of two and to increase the penetration depth of the breast into the USCT aperture by 1 cm. Furthermore, the compute-intensive reflectivity reconstruction was considerably accelerated, now allowing a sub-millimeter volume reconstruction in approximately 16 minutes. These updates made it possible to successfully image the first patients in our ongoing clinical study.
3D temperature field reconstruction using ultrasound sensing system
NASA Astrophysics Data System (ADS)
Liu, Yuqian; Ma, Tong; Cao, Chengyu; Wang, Xingwei
2016-04-01
3D temperature field reconstruction is of practical interest to the power, transportation and aviation industries, and it also opens up opportunities for real-time control or optimization of high-temperature fluid or combustion processes. In our paper, a new distributed optical fiber sensing system consisting of a series of elements is used to generate and receive acoustic signals. This system is the first active temperature field sensing system that combines the advantages of optical fiber sensors (distributed sensing capability) and acoustic sensors (non-contact measurement). Signals along multiple paths are measured simultaneously, enabled by a code division multiple access (CDMA) technique. A proposed Gaussian radial basis function (GRBF)-based approach then approximates the temperature field as a finite summation of space-dependent basis functions and time-dependent coefficients. The travel time of the acoustic signals depends on the temperature of the medium. On this basis, the Gaussian functions are integrated along a number of paths determined by the number and distribution of sensors. The inverse problem of estimating the unknown parameters of the Gaussian functions can be solved from the measured times-of-flight (ToF) of the acoustic waves and the lengths of the propagation paths using the recursive least squares (RLS) method. The simulation results show approximation errors of less than 2% in 2D and 5% in 3D, respectively, demonstrating the feasibility and efficiency of our proposed 3D temperature field reconstruction mechanism.
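The reconstruction idea, expanding the field in fixed Gaussian radial basis functions so that each path integral becomes linear in the unknown coefficients, can be sketched as follows. For simplicity this sketch expands a 2D slowness-like field directly (sidestepping the nonlinear temperature-to-sound-speed map) and solves one batch least-squares problem rather than RLS; the centers, widths and transducer paths are invented:

```python
import numpy as np

# Fixed Gaussian basis: one center per quadrant of the unit square.
centers = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
sigma = 0.2

def basis_integrals(p0, p1, n=200):
    """Numerically integrate each Gaussian basis along the segment p0->p1."""
    t = np.linspace(0.0, 1.0, n)
    pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]
    length = np.linalg.norm(p1 - p0)
    phi = np.exp(-np.sum((pts[:, None, :] - centers[None, :, :]) ** 2,
                         axis=2) / (2 * sigma ** 2))
    return phi.mean(axis=0) * length        # one line integral per basis

# Hypothetical transducer paths: four axis-aligned plus one diagonal
# (the diagonal breaks the row/column symmetry and makes the system full rank).
segs = [((0.0, 0.25), (1.0, 0.25)), ((0.0, 0.75), (1.0, 0.75)),
        ((0.25, 0.0), (0.25, 1.0)), ((0.75, 0.0), (0.75, 1.0)),
        ((0.0, 0.0), (1.0, 1.0))]
A = np.array([basis_integrals(np.array(p0), np.array(p1)) for p0, p1 in segs])

c_true = np.array([1.0, 2.0, 0.5, 1.5])     # "true" basis coefficients
tof = A @ c_true                             # simulated path measurements
c_est, *_ = np.linalg.lstsq(A, tof, rcond=None)
```

Because every path integral is linear in the coefficients, the inversion reduces to a small linear system, which is what makes a recursive (RLS) update practical as new ToF measurements arrive.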
One-step reconstruction of assembled 3D holographic scenes
NASA Astrophysics Data System (ADS)
Velez Zea, Alejandro; Barrera-Ramírez, John Fredy; Torroba, Roberto
2015-12-01
We present a new experimental approach for reconstructing, in one step, 3D scenes otherwise not feasible in a single snapshot with a standard off-axis digital hologram architecture, due to a lack of illuminating resources or a limited setup size. Consequently, whenever a scene cannot be wholly illuminated or its size surpasses the available setup, this protocol can be implemented to solve these issues. We need neither to alter the original setup at every step nor to cover the whole scene with the illuminating source, thus saving resources. With this technique we multiplex the processed holograms of actual diffuse objects composing a scene, using a two-beam off-axis holographic setup in a Fresnel approach. By registering the holograms of several objects individually and applying a spatial filtering technique, the filtered Fresnel holograms can then be added to produce a compound hologram. The simultaneous reconstruction of all objects is performed in one step using the same recovery procedure employed for single holograms. Using this technique, we were able to reconstruct, for the first time to our knowledge, a scene by multiplexing off-axis holograms of 3D objects without cross talk. This technique is important for quantitative visualization of optically packaged multiple images and is useful for a wide range of applications. We present experimental results to support the method.
Real-Time Camera Guidance for 3d Scene Reconstruction
NASA Astrophysics Data System (ADS)
Schindler, F.; Förstner, W.
2012-07-01
We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, and online flight planning of unmanned aerial vehicles.
3D reconstruction on CBCT in the cystic pathology of the jaws
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
The paper presents the acquisition of Cone Beam Computed Tomography scans of human facial bones and their processing in order to obtain a 3D reconstruction model of the skull. The reconstructed model provides useful data to the physician in cases of maxillary cystic pathology; even more important are the data about the relationship of the maxillary cyst to the surrounding anatomical elements. Using B-splines, a 3D volume model of the human facial bones can be achieved. This model can be exported to any CAD system, resulting in a virtual model which can be used in FEM analysis.
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
Facial-paralysis diagnostic system based on 3D reconstruction
NASA Astrophysics Data System (ADS)
Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee
2015-05-01
The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera (Kinect 360) and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment of facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. These results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.
3D-reconstruction of blood vessels by ultramicroscopy
Jährling, Nina; Becker, Klaus
2009-01-01
As recently shown, ultramicroscopy (UM) allows 3D-visualization of even large microscopic structures with µm resolution. Thus, it can be applied to anatomical studies of numerous biological and medical specimens. We reconstructed the three-dimensional architecture of tomato-lectin (Lycopersicon esculentum) stained vascular networks by UM in whole mouse organs. The topology of filigree branches of the microvasculature was visualized. Since tumors require an extensive growth of blood vessels to survive, this novel approach may open up new vistas in neurobiology and histology, particularly in cancer research. PMID:20539742
Discussion of Source Reconstruction Models Using 3D MCG Data
NASA Astrophysics Data System (ADS)
Melis, Massimo De; Uchikawa, Yoshinori
In this study we performed source reconstruction of magnetocardiographic (MCG) signals generated by human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied to the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model provides the best accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the different source models to be resolved.
Computerized 3-D reconstruction of two "double teeth".
Lyroudia, K; Mikrogeorgis, G; Nikopoulos, N; Samakovitis, G; Molyvdas, I; Pitas, I
1997-10-01
"Double teeth" is a malformation of the dentition, and the purpose of this study was to reconstruct three-dimensionally the external and internal morphology of two "double teeth". The first was formed by the conjunction of a mandibular molar and a premolar, and the second by the conjunction of a maxillary molar and a supernumerary tooth. The process of 3-D reconstruction included serial cross-sectioning, photography of the sections, digitization of the photographs, extraction of the boundaries of interest for each section, surface representation using triangulation and, finally, surface rendering using photorealistic effects. The resulting three-dimensional representations of the two teeth helped us visualize their external and internal anatomy. The results showed: a) in the first case, fusion of the radicular and coronal dentin, as well as fusion of the pulp chambers; and b) in the second case, fusion of only the radicular dentin and the pulp chambers. PMID:9550051
Digital Reconstruction of 3D Polydisperse Dry Foam
NASA Astrophysics Data System (ADS)
Chieco, A.; Feitosa, K.; Roth, A. E.; Korda, P. T.; Durian, D. J.
2012-02-01
Dry foam is a disordered packing of bubbles that distort into familiar polyhedral shapes. We have implemented a method that uses optical axial tomography to reconstruct the internal structure of a dry foam in three dimensions. The technique consists of taking a series of photographs of the dry foam against a uniformly illuminated background at successive angles. By summing the projections we create images of the foam cross section. Image analysis of the cross sections allows us to locate Plateau borders and vertices. The vertices are then connected according to Plateau's rules to reconstruct the internal structure of the foam. Using this technique we are able to visualize a large number of bubbles of real 3D foams and obtain statistics of faces and edges.
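The summation of projections described in this abstract is, in essence, unfiltered backprojection. A minimal sketch of the idea in Python, assuming an idealized parallel-beam geometry (not the authors' code; function names and parameters here are illustrative):

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles):
    """Simulate parallel-beam projections: rotate the image, then sum columns."""
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def backproject(sinogram, angles):
    """Unfiltered backprojection: smear each projection back and counter-rotate."""
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, a in zip(sinogram, angles):
        smear = np.tile(proj, (n, 1))               # replicate along the rays
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles)
```

Summing many counter-rotated, smeared projections concentrates intensity at the original structures; in practice a ramp filter is usually applied to each projection first to sharpen the result.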
Automated reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.
Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.
Fast vision-based catheter 3D reconstruction.
Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D
2016-07-21
Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots from the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular- and elliptical-shaped catheter tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms. PMID:27352011
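The paper's closed-form solution is specific to quadratic curves, but its building block, recovering a 3D point from two arbitrary perspective projections, can be illustrated with standard linear (DLT) triangulation. The camera matrices and point below are hypothetical examples, not the authors' setup:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection
    matrices P1, P2 and its two image observations x1, x2 (pixel coords)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applying this estimator to many sampled points along the segmented catheter centerline, or fitting a quadratic curve model directly as the authors do, yields the full 3D shape.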
3D Reconstruction of virtual colon structures from colonoscopy images.
Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C
2014-01-01
This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230
3D Reconstruction of Irregular Buildings and Buddha Statues
NASA Astrophysics Data System (ADS)
Zhang, K.; Li, M.-j.
2014-04-01
Three-dimensional laser scanning can acquire an object's surface data quickly and accurately. However, the post-processing of point clouds is not perfect and can be improved. Based on a study of 3D laser scanning technology, this paper describes in detail solutions for modelling the irregular ancient buildings and Buddha statues in Jinshan Temple, covering data acquisition, modelling, texture mapping, etc. In order to model the irregular ancient buildings effectively, the structure of each building is extracted manually from the point cloud and the textures are mapped with the software 3ds Max. These methods combine 3D laser scanning technology with traditional modelling methods and greatly improve the efficiency and accuracy of restoring the ancient buildings. The statues, on the other hand, are modelled as objects in reverse engineering. The digital models of the statues obtained in this way are not only vivid but also accurate by surveying and mapping standards. On this basis, a 3D scene of Jinshan Temple is reconstructed, which proves the validity of the solutions.
Zhou, Zhi; Liu, Xiaoxiao; Long, Brian; Peng, Hanchuan
2016-01-01
Efficient and accurate digital reconstruction of neurons from large-scale 3D microscopic images remains a challenge in neuroscience. We propose a new automatic 3D neuron reconstruction algorithm, TReMAP, which utilizes 3D Virtual Finger (a reverse-mapping technique) to detect 3D neuron structures based on tracing results on 2D projection planes. Our fully automatic tracing strategy achieves performance close to that of the state-of-the-art neuron tracing algorithms, with the crucial advantage of efficient computation (much lower memory consumption and parallel computation) for large-scale images. PMID:26306866
Online reconstruction of 3D magnetic particle imaging data
NASA Astrophysics Data System (ADS)
Knopp, T.; Hofmann, M.
2016-06-01
Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step, and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block averaging in order to optimize the signal quality for a given amount of reconstruction time.
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
Chen, G; Pan, X; Stayman, J; Samei, E
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical
3D Surface Reconstruction and Volume Calculation of Rills
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.
2015-04-01
We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18 meter long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only saves a great deal of time; it also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, matches feature points across all image pairs, recovers the camera positions, and finally reconstructs a point cloud of the rill surface by triangulating camera positions and feature points. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, excessive motion blur), the sharpness algorithm yields many more matching features. Hence the point densities of the 3D models are increased, which improves the subsequent calculations.
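The derivative-based frame selection described above can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: it assumes grayscale frames and uses the variance of a discrete Laplacian as the sharpness metric, a common derivative-based choice.

```python
import numpy as np

def sharpness(gray):
    """Derivative-based sharpness score: variance of a discrete Laplacian."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return lap.var()

def pick_sharpest(frames, interval=15):
    """Return the index of the sharpest frame in each interval of frames."""
    picks = []
    for start in range(0, len(frames), interval):
        chunk = frames[start:start + interval]
        best = max(range(len(chunk)), key=lambda i: sharpness(chunk[i]))
        picks.append(start + best)
    return picks
```

Blurred frames suppress high-frequency content, so their Laplacian response, and hence its variance, drops; picking the per-interval maximum keeps only the sharpest frames for feature matching.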
SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry
Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N
2014-06-01
Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort of patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D model was reconstructed by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by means of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)
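The eight-point step mentioned in the Methods can be sketched as the classic normalized eight-point algorithm (shown here without the RANSAC loop, which would simply wrap this estimator over random eight-point subsets; a minimal sketch, not the authors' code):

```python
import numpy as np

def normalize(pts):
    """Translate/scale 2D points so centroid is origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F
    from N>=8 corresponding points x1, x2 (N x 2 arrays)."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)        # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1               # undo the normalization
```

A valid estimate satisfies the epipolar constraint x2ᵀ F x1 ≈ 0 for all inlier correspondences, which is exactly what RANSAC scores when rejecting outliers.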
3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2012-01-01
Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free-form artworks. The structured light scanner provides high resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referencing metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been tested through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork was a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079
Demonstration of digital hologram recording and 3D-scenes reconstruction in real-time
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Kulakov, Mikhail N.; Kurbatova, Ekaterina A.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.
2016-04-01
Digital holography is a technique that allows reconstruction of information about 2D objects and 3D scenes. This is achieved by registering the interference pattern formed by two beams, an object beam and a reference beam. The pattern registered by the digital camera is then processed to obtain the amplitude and phase of the object beam. The shape of 2D objects and 3D scenes can be reconstructed either numerically (using a computer) or optically (using spatial light modulators, SLMs). In this work, a Megaplus II ES11000 camera was used for recording digital holograms. The camera has 4008 × 2672 pixels with a pixel size of 9 μm × 9 μm. For hologram recording, a 50 mW frequency-doubled Nd:YAG laser with a wavelength of 532 nm was used. A liquid crystal on silicon SLM, the HoloEye PLUTO VIS, was used for optical reconstruction of the digital holograms. The SLM has 1920 × 1080 pixels with a pixel size of 8 μm × 8 μm. For object reconstruction, a 10 mW He-Ne laser with a wavelength of 632.8 nm was used. The setups for recording digital holograms and for their optical reconstruction with the SLM were combined as follows. The MegaPlus Central Control Software displays the frames registered by the camera on the computer monitor with a small delay, and the SLM can act as an additional monitor, so the registered frames can be shown on the SLM display in near real-time. Thus, recording and reconstruction of the 3D scenes was achieved in real-time. The resolution of the displayed frames was chosen to equal that of the SLM; the number of pixels was limited by the SLM resolution, and the frame rate by that of the camera. This holographic video setup was applied without additional software processing that would increase the time delay between hologram recording and object reconstruction. The setup was demonstrated by reconstructing 3D scenes.
Robust 3D reconstruction system for human jaw modeling
NASA Astrophysics Data System (ADS)
Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.
1999-03-01
This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time-consuming. In this paper, an integrated system has been developed to record the patient's occlusion using computer vision. Data are acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototyping machine.
Fast and efficient particle reconstruction on a 3D grid using sparsity
NASA Astrophysics Data System (ADS)
Cornic, P.; Champagnat, F.; Cheminet, A.; Leclaire, B.; Le Besnerais, G.
2015-03-01
We propose an approach for efficient localization and intensity reconstruction of particles on a 3D grid based on sparsity principles. The computational complexity of the method is limited by using the particle volume reconstruction paradigm (Champagnat et al. in Meas Sci Technol 25, 2014) and a reduction in the problem dimension. Tests on synthetic and experimental data show that the proposed method leads to more efficient detections and to reconstructions of higher quality than classical tomoPIV approaches on a large range of seeding densities, up to ppp ≈ 0.12.
3D face reconstruction from limited images based on differential evolution
NASA Astrophysics Data System (ADS)
Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.
2011-09-01
3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for 3D face synthesis and reconstruction using a simple and efficient global optimizer. A 3D-2D matching algorithm which integrates the 3D morphable model (3DMM) and the differential evolution (DE) algorithm is presented. In 3DMM, the process of fitting shape and texture information to 2D images is treated as the problem of searching for the global minimum in a high-dimensional feature space, in which optimization is apt to converge to local minima. Unlike the traditional scheme used in 3DMM, DE is robust against stagnation in local minima and insensitive to initial values in face reconstruction. Benefiting from DE's successful performance, 3D face models can be created from a single 2D image under various illumination and pose conditions. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high performance. The validation process shows that there is only an insignificant difference between the input image and the 2D face image projected by the 3D model.
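The DE-based fitting can be illustrated with SciPy's `differential_evolution` on a toy stand-in for the 3DMM cost: the "shape basis" and coefficients below are synthetic, not a real morphable model, but the structure (a cost measuring the mismatch between model output and observation, minimized globally over bounded coefficients) is the same.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical linear "shape basis" standing in for 3DMM components.
rng = np.random.default_rng(0)
basis = rng.standard_normal((20, 2))
true_coeffs = np.array([1.5, -0.7])
observed = basis @ true_coeffs        # the "input image" features

def cost(coeffs):
    """Squared mismatch between the model's prediction and the observation."""
    return np.sum((basis @ coeffs - observed) ** 2)

# DE searches the bounded coefficient space globally, so it does not need
# a good initial guess and resists getting stuck in local minima.
result = differential_evolution(cost, bounds=[(-3, 3), (-3, 3)], seed=1)
```

In an actual 3DMM fit the cost would render the parameterized face and compare it to the image pixels, and the search space would have many more dimensions, but DE is applied in the same way.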
Colored 3D surface reconstruction using Kinect sensor
NASA Astrophysics Data System (ADS)
Guo, Lian-peng; Chen, Xiang-ning; Chen, Ying; Liu, Bin
2015-03-01
A colored 3D surface reconstruction method which effectively fuses the information of both depth and color images from a Microsoft Kinect is proposed and demonstrated by experiment. Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which efficiently combines the depth and color data to improve depth quality. The registered depth data are integrated into a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, improved ray casting for rendering the full-color surface is implemented to estimate the color texture of the reconstructed object. Capturing the depth and color images of a toy car, the improved joint-bilateral filter based on region segmentation improves the quality of the depth images with a peak signal-to-noise ratio (PSNR) gain of approximately 4.57 dB, compared with 1.16 dB for the standard joint-bilateral filter. The colored reconstruction results for the toy car demonstrate the suitability and capability of the proposed method.
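The region-segmentation improvement is specific to the paper, but the plain joint-bilateral baseline it builds on can be sketched as follows: depth is smoothed with spatial Gaussian weights, while the range weights come from a registered guidance image (grayscale here for simplicity; brute-force loops for clarity, not speed). All parameter values are illustrative.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Joint-bilateral filter: smooth `depth`, taking range weights from
    the registered guidance image `guide` so edges in the guide are kept."""
    h, w = depth.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    dpad = np.pad(depth.astype(float), radius, mode='edge')
    gpad = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = dpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights: penalize guidance-intensity differences.
            rng_w = np.exp(-(gwin - guide[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out
```

Because the range term is computed on the color/guidance image rather than the noisy depth itself, depth discontinuities that coincide with color edges are preserved while flat regions are denoised.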
Computer-generated hologram for 3D scene from multi-view images
NASA Astrophysics Data System (ADS)
Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong
2013-05-01
Recently, computer-generated holograms (CGH) calculated from real existing objects have been more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing within a suitable navigation range. After a unified 3-D point source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D scenes are faithfully reconstructed using numerical reconstruction.
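The point-based CGH step can be illustrated by superposing spherical waves from each 3D point source on the hologram plane. This is a generic sketch, not the authors' code; all parameter values (wavelength, pixel pitch, point positions) are hypothetical.

```python
import numpy as np

def point_cgh(points, amps, wavelength, pitch, shape):
    """Point-based CGH: superpose spherical waves from 3D point sources
    (x, y, z) on a hologram plane at z = 0 and return the complex field.
    An intensity hologram would be formed by interference with a
    reference wave, e.g. abs(field + ref)**2."""
    k = 2 * np.pi / wavelength
    ny, nx = shape
    y = (np.arange(ny) - ny // 2) * pitch
    x = (np.arange(nx) - nx // 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros(shape, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from the point
    return field
```

The cost grows with (number of points) × (hologram pixels), which is why practical point-based CGH implementations rely on look-up tables or GPU acceleration.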
A preliminary investigation of 3D preconditioned conjugate gradient reconstruction for cone-beam CT
NASA Astrophysics Data System (ADS)
Fu, Lin; De Man, Bruno; Zeng, Kai; Benson, Thomas M.; Yu, Zhou; Cao, Guangzhi; Thibault, Jean-Baptiste
2012-03-01
Model-based iterative reconstruction (MBIR) methods based on maximum a posteriori (MAP) estimation have been recently introduced to multi-slice CT scanners. The model-based approach has shown promising image quality improvement with reduced radiation dose compared to conventional FBP methods, but the associated high computation cost limits its widespread use in clinical environments. Among the various choices of numerical algorithms to optimize the MAP cost function, simultaneous update methods such as the conjugate gradient (CG) method have a relatively high level of parallelism to take full advantage of a new generation of many-core computing hardware. With proper preconditioning techniques, fast convergence speeds of CG algorithms have been demonstrated in 3D emission and 2D transmission reconstruction. However, 3D transmission reconstruction using preconditioned conjugate gradient (PCG) has not been reported. Additional challenges in applying PCG in 3D CT reconstruction include the large size of clinical CT data, shift-variant and incomplete sampling, and complex regularization schemes to meet the diagnostic standard of image quality. In this paper, we present a ramp-filter based PCG algorithm for 3D CT MBIR. Convergence speeds of algorithms with and without using the preconditioner are compared.
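The optimization machinery discussed above can be illustrated with a generic preconditioned conjugate gradient routine for a symmetric positive-definite system. This sketch shows where a preconditioner enters the CG iteration; it does not reproduce the paper's ramp-filter preconditioner or the MBIR cost function:

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for SPD systems A x = b.

    A: callable computing A @ x; M_inv: callable applying the preconditioner
    (in MBIR this is where a ramp-filter-based operator would act).
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A(x)
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)                 # preconditioning step
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # conjugate search direction
        rz = rz_new
    return x
```

With a good preconditioner the effective condition number drops, which is the convergence-speed effect the paper compares with and without preconditioning.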
Using flow information to support 3D vessel reconstruction from rotational angiography
Waechter, Irina; Bredno, Joerg; Weese, Juergen; Barratt, Dean C.; Hawkes, David J.
2008-07-15
For the assessment of cerebrovascular diseases, it is beneficial to obtain three-dimensional (3D) morphologic and hemodynamic information about the vessel system. Rotational angiography is routinely used to image the 3D vascular geometry and we have shown previously that rotational subtraction angiography has the potential to also give quantitative information about blood flow. Flow information can be determined when the angiographic sequence shows inflow and possibly outflow of contrast agent. However, a standard volume reconstruction assumes that the vessel tree is uniformly filled with contrast agent during the whole acquisition. If this is not the case, the reconstruction exhibits artifacts. Here, we show how flow information can be used to support the reconstruction of the 3D vessel centerline and radii in this case. Our method uses the fast marching algorithm to determine the order in which voxels are analyzed. For every voxel, the rotational time intensity curve (R-TIC) is determined from the image intensities at the projection points of the current voxel. Next, the bolus arrival time of the contrast agent at the voxel is estimated from the R-TIC. Then, a measure of the intensity and duration of the enhancement is determined, from which a speed value is calculated that steers the propagation of the fast marching algorithm. The results of the fast marching algorithm are used to determine the 3D centerline by backtracking. The 3D radius is reconstructed from 2D radius estimates on the projection images. The proposed method was tested on computer simulated rotational angiography sequences with systematically varied x-ray acquisition, blood flow, and contrast agent injection parameters and on datasets from an experimental setup using an anthropomorphic cerebrovascular phantom. For the computer simulation, the mean absolute error of the 3D centerline and 3D radius estimation was 0.42 and 0.25 mm, respectively. For the experimental datasets, the mean absolute
3D parameter reconstruction in hyperspectral diffuse optical tomography
NASA Astrophysics Data System (ADS)
Saibaba, Arvind K.; Krishnamurthy, Nishanth; Anderson, Pamela G.; Kainerstorfer, Jana M.; Sassaroli, Angelo; Miller, Eric L.; Fantini, Sergio; Kilmer, Misha E.
2015-03-01
The imaging of shape perturbation and chromophore concentration using Diffuse Optical Tomography (DOT) data can be mathematically described as an ill-posed and non-linear inverse problem. The reconstruction algorithm for hyperspectral data using a linearized Born model is prohibitively expensive, both in terms of computation and memory. We model the shape of the perturbation using a parametric level-set (PaLS) approach. We discuss novel computational strategies for reducing the computational cost, based on a Krylov subspace approach for parametric linear systems and a compression strategy for the parameter-to-observation map. We demonstrate the validity of our approach by comparison with experiments.
3D reconstruction of rotational video microscope based on patches
NASA Astrophysics Data System (ADS)
Ma, Shijie; Qu, Yufu
2015-11-01
Due to their small field of view and shallow depth of field, microscopes can capture only 2D images of an object. In order to observe the three-dimensional structure of a micro object, a microscopy image reconstruction algorithm based on an improved patch-based multi-view stereo (PMVS) algorithm is proposed. The new algorithm improves PMVS in two respects: first, it increases the number of propagation directions; second, during expansion, different expansion radii and iteration counts are set according to the angle between the normal vector of the seed patch and the direction vector of the line passing through the seed patch center and the camera center. Compared with PMVS, the number of 3D points produced by the new algorithm is three times that of PMVS, and holes in the vertical side are also eliminated.
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.
2009-01-01
Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least-squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.
Accuracy assessment of 3D bone reconstructions using CT: an in vitro comparison.
Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A
2015-08-01
Computed tomography provides high-contrast imaging of joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, in computer-assisted surgeries, and in computational dynamic and structural analyses. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. The bone surface digitizations obtained in this study served as the ground-truth measure of the underlying geometry. We evaluated a commercially available reconstruction technique with clinical CT scanning protocols, using the elbow joint as an example of a surface with complex geometry. To assess the accuracy of the reconstructed models (8 fresh-frozen cadaveric specimens) against the ground-truth bony digitization, proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast yielded a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used, commercially available reconstruction algorithms can create models that accurately represent the true geometry. PMID:26037323
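The proximity-mapping error measure used above reduces to a nearest-neighbor distance computation between the digitized ground-truth points and the reconstructed surface points. A brute-force illustration follows (a KD-tree would be used for realistic point counts); function and variable names are ours:

```python
import numpy as np

def residual_errors(model_pts, digitized_pts):
    """Proximity mapping: for every digitized ground-truth point, the
    distance to the nearest point of the reconstructed surface model.

    model_pts: (M, 3) reconstructed surface samples;
    digitized_pts: (D, 3) optically tracked ground-truth points.
    Returns per-point residual errors (D,).
    """
    diffs = digitized_pts[:, None, :] - model_pts[None, :, :]
    d = np.sqrt((diffs**2).sum(axis=-1))
    return d.min(axis=1)
```

The mean of these residuals is the per-region statistic the study reports (e.g. < 0.4 mm cortical).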
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and the possible involvement of ionizing radiation, and its overall robustness depends strongly on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquires a sparse set of white-light surface images and bioluminescent images of a mouse. The white-light images are applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that, multi-view luminescent images are integrated based on this reconstruction, and an algorithm is applied to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction is achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
A new method to combine 3D reconstruction volumes for multiple parallel circular cone beam orbits
Baek, Jongduk; Pelc, Norbert J.
2010-01-01
Purpose: This article presents a new reconstruction method for 3D imaging using a multiple 360° circular orbit cone beam CT system, specifically a way to combine 3D volumes reconstructed with each orbit. The main goal is to improve the noise performance in the combined image while avoiding cone beam artifacts. Methods: The cone beam projection data of each orbit are reconstructed using the FDK algorithm. When at least a portion of the total volume can be reconstructed by more than one source, the proposed combination method combines these overlap regions using weighted averaging in frequency space. The local exactness and the noise performance of the combination method were tested with computer simulations of a Defrise phantom, a FORBILD head phantom, and uniform noise in the raw data. Results: A noiseless simulation showed that the local exactness of the reconstructed volume from the source with the smallest tilt angle was preserved in the combined image. A noise simulation demonstrated that the combination method improved the noise performance compared to a single orbit reconstruction. Conclusions: In CT systems which have overlap volumes that can be reconstructed with data from more than one orbit and in which the spatial frequency content of each reconstruction can be calculated, the proposed method offers improved noise performance while keeping the local exactness of data from the source with the smallest tilt angle. PMID:21089770
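The combination step described above, weighted averaging of overlap regions in frequency space, can be sketched as follows. This minimal version assumes the two FDK reconstructions are already aligned and accepts an arbitrary per-frequency weight; the paper's specific tilt-angle-dependent weighting scheme is not reproduced:

```python
import numpy as np

def combine_in_frequency(vol_a, vol_b, weight_a):
    """Combine two reconstructions of the same overlap region by weighted
    averaging of their 3-D Fourier transforms.

    weight_a: per-frequency weight in [0, 1] (scalar or array broadcastable
    to the FFT grid); the complementary weight (1 - weight_a) is applied to
    vol_b. Favouring the orbit with the smaller tilt angle near its well-
    sampled frequencies preserves local exactness while averaging down noise.
    """
    Fa = np.fft.fftn(vol_a)
    Fb = np.fft.fftn(vol_b)
    F = weight_a * Fa + (1.0 - weight_a) * Fb
    return np.fft.ifftn(F).real
```

With a constant weight of 0.5 this reduces, by linearity of the FFT, to plain voxel-wise averaging; the frequency-space formulation matters when the weights vary across the FFT grid.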
The new CORIMP CME catalog & 3D reconstructions
NASA Astrophysics Data System (ADS)
Byrne, Jason; Morgan, Huw; Gallagher, Peter; Habbal, Shadia; Davies, Jackie
2015-04-01
A new coronal mass ejection (CME) catalog has been built from a unique set of coronal image-processing techniques, called CORIMP, that overcomes many of the limitations of catalogs currently in operation. An online database has been produced for the SOHO/LASCO data and the event detections therein, providing information on CME onset time, position angle, angular width, speed, acceleration, and mass, along with kinematic plots and observation movies. The high fidelity and robustness of these methods and of the derived CME structure and kinematics will lead to an improved understanding of the dynamics of CMEs, and a realtime version of the algorithm has been implemented to provide CME detection alerts to the interested space weather community. Furthermore, STEREO data have provided the ability to perform 3D reconstructions of CMEs observed in multipoint observations. This allows a determination of the 3D kinematics and morphologies of CMEs characterised in STEREO data via the 'elliptical tie-pointing' technique. The associated observations from SOHO, SDO and PROBA2 (and the intended use of K-Cor) provide additional measurements and constraints on the CME analyses in order to improve their accuracy.
Computed 3D visualisation of an extinct cephalopod using computer tomographs
NASA Astrophysics Data System (ADS)
Lukeneder, Alexander
2012-08-01
The first 3D visualisation of a heteromorph cephalopod species from the Southern Alps (Dolomites, northern Italy) is presented. Computed tomography, palaeontological data and 3D reconstructions were combined in the production of a movie, which shows a life reconstruction of the extinct organism. This detailed reconstruction accords with current knowledge of the shape, mode of life and habitat of this animal. The results are based on the most complete shell known thus far of the genus Dissimilites. Object-based combined analyses from computed tomography and various 3D software packages help to understand morphological details as well as their ontogenetic changes in fossil material. An additional goal of this study was to show changes in locomotion during different ontogenetic phases of such fossil, marine shell-bearing animals (ammonoids). Hence, the presented models and tools can serve as starting points for discussions on the morphology and locomotion of extinct cephalopods in general, and of the genus Dissimilites in particular. The heteromorph ammonoid genus Dissimilites is interpreted here as an active swimmer of the Tethyan Ocean. This study portrays non-destructive methods of 3D visualisation applied to palaeontological material, starting with computed tomography and resulting in animated, high-quality video clips. The 3D geometrical models and animation presented here, which are based on palaeontological material, demonstrate the wide range of applications and analytical techniques, and also outline possible limitations of 3D models in earth sciences and palaeontology. The realistic 3D models and motion pictures can easily be shared amongst palaeontologists. Data, images and short clips can be discussed online and, if necessary, adapted in morphological details and motion style to better represent the cephalopod animal. PMID:24850976
Characterizing heterogeneity among virus particles by stochastic 3D signal reconstruction
NASA Astrophysics Data System (ADS)
Xu, Nan; Gong, Yunye; Wang, Qiu; Zheng, Yili; Doerschuk, Peter C.
2015-09-01
In single-particle cryo electron microscopy, many electron microscope images, each of a single instance of a biological particle such as a virus or a ribosome, are measured, and the 3-D electron scattering intensity of the particle is reconstructed by computation. Because each instance of the particle is imaged separately, it should be possible to characterize the heterogeneity of the different instances of the particle as well as to compute a nominal reconstruction of the particle. In this paper, such an algorithm is described and demonstrated on the bacteriophage Hong Kong 97. The algorithm is a statistical maximum-likelihood estimator computed by an expectation-maximization algorithm implemented in Matlab software.
Lanza, Alessandro; Laino, Luigi; Rossiello, Luigi; Perillo, Letizia; Ermo, Antonio Dell; Cirillo, Nicola
2008-01-01
A wide range of diseases may present with radiographic features of osteolysis. Periapical inflammation, cysts and benign tumours, and bone malignancies may all show bone resorption on a radiograph. Features of the surrounding bone, the margins of the lesion, and biological behaviour, including a tendency to infiltration and root resorption, may represent important criteria for distinguishing benign tumours from their malignant counterparts, although the radiographic aspect of the lesion is not always predictive. Therefore a careful differential diagnosis has to be reached to choose the best management. Here, we report a case of giant cell tumour (GCT) whose radiological features on computed tomography (CT) suggested the presence of a bone malignancy, whereas the evaluation of a routine OPT scan reassured us about the benign nature of the lesion. A brief review of the literature on this benign but locally aggressive neoplasm is also provided. PMID:19088886
Dense point-cloud creation using superresolution for a monocular 3D reconstruction system
NASA Astrophysics Data System (ADS)
Diskin, Yakov; Asari, Vijayan K.
2012-05-01
We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm focuses on the 3D reconstruction of a scene using only a single moving camera; in this way, the system can be used to construct a point-cloud model of its unknown surroundings. The original reconstruction process, resulting in a point cloud, was based on feature matching and depth triangulation analysis. Although dense, this original model was hindered by its low disparity resolution: as feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of a nonlinear super-resolution preprocessing step, the accuracy of the point cloud, which relies on precise disparity measurement, has increased significantly. Using a pixel-by-pixel approach, the super-resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high-resolution input frames. Thus, a feature point traverses a more precise set of discrete disparities. The quantity of points within the 3D point-cloud model is also significantly increased, since the number of features is directly proportional to the resolution and high-frequency content of the input image. The contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud for autonomous navigation and mapping tasks within unknown environments.
3D Alternating Direction TV-Based Cone-Beam CT Reconstruction with Efficient GPU Implementation
Cai, Ailong; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Guan, Min; Li, Jianxin
2014-01-01
Iterative image reconstruction (IIR) with sparsity-exploiting methods, such as total variation (TV) minimization, promises potentially large reductions in sampling requirements. However, the computational complexity becomes a heavy burden, especially in 3D reconstruction situations. In order to improve the performance of iterative reconstruction, an efficient IIR algorithm for cone-beam computed tomography (CBCT) with a GPU implementation is proposed in this paper. First, an algorithm based on alternating-direction total variation using local linearization and a proximity technique is proposed for CBCT reconstruction. The applied proximal technique avoids the costly pseudoinverse computation of a large matrix, which makes the proposed algorithm applicable and efficient for CBCT imaging. The iteration of this algorithm is simple but convergent. The simulation and real CT data reconstruction results indicate that the proposed algorithm is both fast and accurate. The GPU implementation shows an excellent acceleration ratio of more than 100 compared with CPU computation, without losing numerical accuracy. The runtime of the new 3D algorithm is about 6.8 seconds per loop with an image size of 256 × 256 × 256 and 36 projections of size 512 × 512. PMID:25045400
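The role that TV regularization plays in such iterative reconstruction can be illustrated with a deliberately simple 2D example: explicit gradient descent on a smoothed-TV denoising objective. This is not the paper's alternating-direction proximal algorithm (and has none of its efficiency); all names and parameters are our own:

```python
import numpy as np

def smoothed_tv_grad(x, eps):
    """Gradient of sum(sqrt(|grad x|^2 + eps)) with forward differences."""
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    norm = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / norm, gy / norm
    # adjoint of the forward-difference operators (negative backward diff)
    return -np.diff(px, axis=0, prepend=0) - np.diff(py, axis=1, prepend=0)

def tv_denoise(y, lam=0.1, tau=0.05, iters=200, eps=1e-2):
    """Explicit gradient descent on 0.5*||x - y||^2 + lam * smoothed-TV(x).

    TV pushes the estimate toward piecewise-constant images, which is why
    it tolerates sparse sampling; the data term keeps x close to y.
    """
    x = y.astype(float).copy()
    for _ in range(iters):
        x -= tau * ((x - y) + lam * smoothed_tv_grad(x, eps))
    return x
```

In CT the quadratic data term is replaced by a projection-domain fidelity term, and schemes like the paper's alternating-direction method exist precisely because this naive explicit descent converges slowly.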
Gene Electrotransfer in 3D Reconstructed Human Dermal Tissue.
Madi, Moinecha; Rols, Marie-Pierre; Gibot, Laure
2016-01-01
Gene electrotransfer into the skin is of particular interest for the development of medical applications including DNA vaccination, cancer treatment, wound healing and the treatment of local skin disorders. However, such clinical applications are currently limited due to poor understanding of the mechanisms governing DNA electrotransfer within human tissue. Nowadays, most studies are carried out in rodent models, but rodent skin differs from human skin in terms of cell composition and architecture. We used a tissue-engineering approach to study gene electrotransfer mechanisms in a human tissue context. Primary human dermal fibroblasts were cultured according to the self-assembly method to produce 3D reconstructed human dermal tissue. In this study, we showed that cells of the reconstructed cutaneous tissue were efficiently electropermeabilized by applying millisecond electric pulses, without affecting their viability. A reporter gene was successfully electrotransferred into this human tissue and gene expression was detected for up to 48 h. Interestingly, the transfected cells were located solely on the upper surface of the tissue, where they were in close contact with the plasmid DNA solution. Furthermore, we report evidence that electrotransfection success depends on plasmid mobility within the tissue, which is rich in collagens, but not on cell proliferation status. In conclusion, in addition to proposing a reliable alternative to animal experiments, tissue engineering produces a valid biological tool for the in vitro study of gene electrotransfer mechanisms in human tissue. PMID:27029947
Reconstructing White Walls: Multi-View Multi-Shot 3D Reconstruction of Textureless Surfaces
NASA Astrophysics Data System (ADS)
Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf
2016-06-01
The reconstruction of the 3D geometry of a scene from image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution for enhancing the 3D reconstruction of weakly-textured surfaces using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, an increase of up to 300% in the completeness of the 3D reconstruction is achieved.
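The acquisition-side idea above (averaging multiple shots per viewpoint, then amplifying contrast) can be sketched in a few lines. This toy version uses a global mean where the paper adapts locally; the gain `alpha` and the function name are hypothetical:

```python
import numpy as np

def fuse_shots(shots, alpha=2.0):
    """Fuse multiple shots of the same viewpoint.

    Averaging N shots suppresses statistically uncorrelated noise (its
    standard deviation drops by sqrt(N)); the surviving low-contrast
    texture is then linearly amplified around the mean so it occupies
    more of the 8-bit range before entering the stereo pipeline.
    """
    mean_img = np.mean(shots, axis=0)
    pivot = mean_img.mean()          # stand-in for an adaptive local mean
    amplified = pivot + alpha * (mean_img - pivot)
    return np.clip(amplified, 0, 255)
```

The point of amplifying after averaging is that noise and texture are indistinguishable within a single shot; only the multi-shot average separates them.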
Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.
Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee
2015-12-01
3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining the nasal passages, mucosa, polyps, sinuses, and nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm is computationally expensive, particularly in the feature-matching step, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature-matching area, so that matching between two corresponding features from different images can be performed efficiently. With this approach, the matching time is greatly reduced. The proposed technique is tested on endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and average error of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. PMID:26498516
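The effect of confining the matching area can be illustrated with a crisp (non-fuzzy) zone: each feature is compared only against candidates within a fixed radius of its own position, instead of against every feature in the other image. The paper's fuzzy membership is not reproduced; names are ours:

```python
import numpy as np

def match_with_zone(desc1, kp1, desc2, kp2, zone_radius):
    """Match SIFT-like descriptors under a spatial zone constraint.

    desc1/desc2: arrays of descriptors; kp1/kp2: matching lists of (x, y)
    keypoint positions. Returns (index1, index2) pairs by nearest
    descriptor distance among the zone candidates only.
    """
    matches = []
    for i, (d1, p1) in enumerate(zip(desc1, kp1)):
        # confine the search area instead of scanning every feature
        cand = [j for j, p2 in enumerate(kp2)
                if np.linalg.norm(np.asarray(p1) - np.asarray(p2)) <= zone_radius]
        if not cand:
            continue
        dists = [np.linalg.norm(d1 - desc2[j]) for j in cand]
        matches.append((i, cand[int(np.argmin(dists))]))
    return matches
```

Because consecutive endoscopic frames move little, a small zone keeps the true match inside the candidate set while discarding most of the comparisons that make exhaustive SIFT matching slow.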
Glasses for 3D ultrasound computer tomography: phase compensation
NASA Astrophysics Data System (ADS)
Zapf, M.; Hopp, T.; Ruiter, N. V.
2016-03-01
Ultrasound Computer Tomography (USCT), developed at KIT, is a promising new imaging system for breast cancer diagnosis and was successfully tested in a pilot study. The 3D USCT II prototype consists of several hundred ultrasound (US) transducers on a semi-ellipsoidal aperture. Spherical waves are sequentially emitted by individual transducers and received in parallel by many transducers. Reflectivity volumes are reconstructed by synthetic aperture focusing (SAFT). However, straightforward SAFT imaging leads to blurred images due to system imperfections. We present an extension of a previously proposed approach to enhance the images. This approach includes additional a priori information and system characteristics, and now incorporates spatial phase compensation. The approach was evaluated with a simulation and clinical data sets. An increase in image quality was observed and quantitatively measured by SNR and other metrics.
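The SAFT reconstruction mentioned above is, at its core, delay-and-sum focusing. The sketch below shows that core for a set of emitter-receiver A-scans, omitting the paper's phase compensation and system-characteristic corrections; all names and the geometry convention are our assumptions:

```python
import numpy as np

def saft_reconstruct(a_scans, emitters, receivers, voxels, fs, c=1500.0):
    """Delay-and-sum synthetic aperture focusing (SAFT).

    For each voxel, the emitter->voxel->receiver travel time selects one
    sample of every A-scan; summing those samples focuses the aperture.
    a_scans: (K, n) sampled signals, one per (emitter, receiver) pair;
    emitters/receivers: (K, 3) positions in metres; voxels: (V, 3);
    fs: sample rate in Hz; c: speed of sound in m/s.
    """
    image = np.zeros(len(voxels))
    n = a_scans.shape[1]
    for v, voxel in enumerate(voxels):
        acc = 0.0
        for k, (e, r) in enumerate(zip(emitters, receivers)):
            t = (np.linalg.norm(voxel - e) + np.linalg.norm(voxel - r)) / c
            idx = int(round(t * fs))
            if 0 <= idx < n:
                acc += a_scans[k, idx]
        image[v] = acc
    return image
```

System imperfections shift the true echo away from the geometric sample index, which is why uncorrected delay-and-sum blurs and why the paper's phase compensation helps.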
3D volume reconstruction of a mouse brain histological sections using warp filtering
Ju, Tao; Warren, Joe; Carson, James P.; Bello, Musodiq; Kakadiaris, Ioannis; Chiu, Wah; Thaller, Christina; Eichele, Gregor
2006-09-30
Sectioning tissues for optical microscopy often introduces distortions in the resulting sections that make 3D reconstruction difficult. Here we present an automatic method for producing a smooth 3D volume from distorted 2D sections in the absence of any undistorted reference. The method is based on pairwise elastic image warps between successive tissue sections, which can be computed by 2D image registration. Using a Gaussian filter, an average warp is computed for each section from the pairwise warps in a group of its neighboring sections. The average warps deform each section to match its neighboring sections, thus creating a smooth volume where corresponding features on successive sections lie close to each other. The proposed method can be used with any existing 2D image registration method for 3D reconstruction. In particular, we present a novel image-warping algorithm based on dynamic programming that extends the Dynamic Time Warping used in 1D speech recognition to compute pairwise warps between high-resolution 2D images. The warping algorithm efficiently computes a restricted class of 2D local deformations that are characteristic of successive tissue sections. Finally, a validation framework is proposed and applied to evaluate the quality of reconstruction using both real sections and a synthetic volume.
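The Gaussian-weighted averaging of pairwise warps can be sketched as follows, assuming displacement fields indexed by section pairs (our own data layout, not the authors'); the registration step that produces the pairwise warps is not shown:

```python
import numpy as np

def average_warp(pairwise_warps, i, sigma=1.5, radius=3):
    """Average warp for section i from pairwise warps to its neighbours.

    pairwise_warps[(i, j)]: displacement field (H, W, 2) taking section i
    into register with section j (identity omitted). Gaussian weights over
    the section offset give nearby sections more influence, so applying the
    average warp pulls each section toward a locally smooth volume.
    """
    num, den = None, 0.0
    for off in range(-radius, radius + 1):
        if off == 0 or (i, i + off) not in pairwise_warps:
            continue
        w = np.exp(-off**2 / (2.0 * sigma**2))
        field = pairwise_warps[(i, i + off)]
        num = w * field if num is None else num + w * field
        den += w
    return num / den if den > 0 else None
```

Note the smoothing acts on the warps, not the images: if the warps to the two adjacent sections pull in opposite directions, the average warp cancels to the identity and the section stays put.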
Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner
NASA Astrophysics Data System (ADS)
Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.
2004-10-01
We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
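The OSEM update applied after rebinning can be sketched for a toy linear emission model. The system matrix, subset count, and noiseless data below are illustrative; in the FORE+OSEM(DB) algorithm, the detector-blurring factor would be folded into the system matrix:

```python
import numpy as np

# Minimal ordered-subsets EM (OSEM) sketch for a tiny emission model.
# Rows of A are measurement bins, columns are image pixels.
rng = np.random.default_rng(1)
n_bins, n_pix = 40, 8
A = rng.uniform(0.1, 1.0, size=(n_bins, n_pix))
x_true = rng.uniform(1.0, 5.0, size=n_pix)
y = A @ x_true                                   # noiseless sinogram data

subsets = np.array_split(np.arange(n_bins), 4)   # 4 ordered subsets
x = np.ones(n_pix)                               # positive initial image
for _ in range(50):
    for s in subsets:                            # one EM-like update per subset
        As, ys = A[s], y[s]
        # multiplicative update: x * backproject(data / forward) / sensitivity
        x = x * (As.T @ (ys / (As @ x))) / As.sum(axis=0)

print(np.abs(x - x_true).max())  # error shrinks toward the true image
```

The multiplicative form keeps the image nonnegative at every step, one of the practical reasons EM-type methods are preferred for emission data.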
Diachronic 3D Reconstruction for Lost Cultural Heritage
NASA Astrophysics Data System (ADS)
Guidi, G.; Russo, M.
2011-09-01
Cultural Heritage artifacts are often underestimated because of their hidden presence in the landscape. This problem is particularly acute in countries like Italy, where the massive number of "famous" artifacts tends to overshadow other remains unless they are properly exposed, or where the remains are so dramatically damaged that very few interpretation clues are left to the visitor. In such cases a virtual presentation of the Cultural Heritage site can be of great help, especially for explaining the evolution of its status, sometimes giving sense to a few sparse stones. The definition of these digital representations deals with two crucial aspects: on the one hand, the possibility of 3D surveying the relics in order to have an accurate geometrical image of the current status of the artifact; on the other hand, the presence of historical sources, in the form of written texts or images, that once properly matched with the current geometrical data may help to digitally recreate a set of 3D models representing the various historical phases (a diachronic model), up to the current one. The core of this article is the definition of an integrated methodology that starts from a high-resolution digital survey of the remains of an ancient building and develops a coherent virtual reconstruction from different historical sources, suggesting a scalable method suitable for re-use in generating a 4D (geometry + time) model of the artifact. This approach has been tested on the "Basilica di San Giovanni in Conca" in Milan, a very significant example for its complex historic evolution, which combines evident historic values with an invisible presence inside the city.
3D reconstruction software comparison for short sequences
NASA Astrophysics Data System (ADS)
Strupczewski, Adam; Czupryński, Błażej
2014-11-01
Large-scale multiview reconstruction has recently become a very popular area of research. There are many open-source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons of the available software in terms of accuracy on small datasets that a single user can create. The typical datasets for testing such software are archaeological sites or cities, comprising thousands of images. This paper presents a comparison of currently available open-source multiview reconstruction software for small datasets. It also compares the open-source solutions with a simple structure-from-motion pipeline developed by the authors from scratch using the OpenCV and Eigen libraries.
Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza
2013-01-01
A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction, yielding a 34-54-fold speed-up over the C++ implementation. PMID:22392604
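An iterative CS reconstruction of the soft-thresholding family can be sketched on a toy problem. The random sensing matrix and sparse 1D signal below are illustrative stand-ins for the undersampled radial Fourier operator and the volumetric image, and all sizes and the regularization weight are assumptions:

```python
import numpy as np

# Sketch of CS recovery by iterative soft-thresholding (ISTA):
# alternate a gradient step on the data-fidelity term with a
# sparsity-promoting shrinkage, as in l1-regularized reconstruction.
rng = np.random.default_rng(2)
n, m, k = 64, 32, 4                        # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in sensing operator
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(3.0, 1.0, size=k)
y = A @ x_true                             # undersampled measurements

lam = 0.05                                 # illustrative sparsity weight
step = 0.9 / np.linalg.norm(A, 2) ** 2     # step below 1/Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    g = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

print(np.abs(x - x_true).max())  # close to the true sparse signal
```

The shrinkage step is what removes the incoherent undersampling artifacts: aliasing energy spreads thinly across many coefficients and is thresholded away, while true signal coefficients survive.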
Optic flow aided navigation and 3D scene reconstruction
NASA Astrophysics Data System (ADS)
Rollason, Malcolm
2013-10-01
An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
NASA Astrophysics Data System (ADS)
Atkinson, C.; Buchmann, N. A.; Soria, J.
2013-11-01
Three-dimensional (3D) volumetric velocity measurement techniques, such as tomographic or holographic particle image velocimetry (PIV), rely upon the computationally intensive formation, storage and localized interrogation of multiple 3D particle intensity fields. Calculation of a single velocity field typically requires the extraction of particle intensities into tens of thousands of 3D sub-volumes or discrete particle clusters, the processing of which can significantly affect the performance of 3D cross-correlation based PIV and 3D particle tracking velocimetry (PTV). In this paper, a series of popular and customized volumetric data formats are presented and investigated using synthetic particle volumes and experimental data arising from tomographic PIV measurements of a turbulent boundary layer. Results show that the use of a sub-grid ordered non-zero intensity format with a sub-grid size of 16 × 16 × 16 points provides the best performance for cross-correlation based PIV analysis, while a particle clustered non-zero intensity format provides the best format for PTV applications. In practical tomographic PIV measurements the sub-grid ordered non-zero intensity format offered a 29% improvement in reconstruction times, while providing a 93% reduction in volume data requirements and a 28% overall improvement in cross-correlation based velocity analysis and validation times.
CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D
NASA Astrophysics Data System (ADS)
Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.
2015-08-01
Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under
3D seismic data reconstruction based on complex-valued curvelet transform in frequency domain
NASA Astrophysics Data System (ADS)
Zhang, Hua; Chen, Xiaohong; Li, Hongxing
2015-02-01
Traditional seismic data sampling must follow the Nyquist sampling theorem. However, field data acquisition may not meet the sampling criteria due to missing traces or limits on exploration cost, giving rise to a prestack data reconstruction problem. Recently, researchers have proposed many useful methods to regularize seismic data. In this paper, a 3D seismic data reconstruction method based on the Projections Onto Convex Sets (POCS) algorithm and a complex-valued curvelet transform (CCT) is introduced in the frequency domain. In order to improve reconstruction efficiency and reduce computation time, the seismic data are transformed from the t-x-y domain to the f-x-y domain and the reconstruction is processed for every frequency slice. The choice of threshold parameter at each iteration is important for reconstruction efficiency; therefore, an exponential square root decreased (ESRD) threshold is proposed. The experimental results show that, for the same reconstruction result, the ESRD threshold greatly reduces the number of iterations and improves reconstruction efficiency compared to other thresholds. We also analyze the noise robustness of the CCT-based POCS reconstruction method. Example studies on synthetic and real marine seismic data show that the proposed method is efficient and applicable.
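The POCS loop with a decreasing threshold can be sketched in 1D, with the FFT standing in for the complex-valued curvelet transform; the exponential decay rate below is an illustrative schedule, not the paper's ESRD formula:

```python
import numpy as np

# POCS trace-interpolation sketch: alternate thresholding in a transform
# domain (FFT here) with re-insertion of the observed samples.
n = 128
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

rng = np.random.default_rng(3)
mask = rng.random(n) > 0.4           # ~60% of traces observed
observed = signal * mask             # decimated data with missing traces

x = observed.copy()
tmax = np.abs(np.fft.fft(observed)).max()
for it in range(100):
    tau = tmax * np.exp(-0.1 * it)           # decreasing threshold schedule
    X = np.fft.fft(x)
    X[np.abs(X) < tau] = 0.0                 # keep strong transform coefficients
    x = np.real(np.fft.ifft(X))
    x[mask] = signal[mask]                   # project onto the data-consistency set

err = np.abs(x - signal).max()
print(err)  # error at the missing traces shrinks as the threshold decays
```

How fast the threshold decays controls the trade-off the abstract describes: a schedule that drops too slowly wastes iterations, which is the inefficiency the ESRD schedule is designed to avoid.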
Computational 3-D inversion for seismic exploration
Gavrilov, E.M.; Forslund, D.W.; Fehler, M.C.
1997-10-01
This is the final report of a four-month, Laboratory Directed Research and Development (LDRD) project carried out at the Los Alamos National Laboratory (LANL). There is a great need for a new and effective technology, with a wide scope of industrial applications, for investigating media whose internal properties can be explored only from backscattered data. The project was dedicated to the development of a three-dimensional computational inversion tool for seismic exploration. A new computational concept for the inversion algorithm was proposed. The goal of the project was to prove the concept and the practical validity of the algorithm for petroleum exploration.
NASA Astrophysics Data System (ADS)
Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.
2016-02-01
Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.
High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures
Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando
2011-01-01
Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017
Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures
NASA Astrophysics Data System (ADS)
Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino
2010-05-01
3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. Point correspondences are established using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system by assuming that the contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting camera focus during image acquisition. An evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
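The final linear-triangulation step can be sketched with the standard DLT construction; the camera matrices below are synthetic stand-ins, not a calibrated fundus camera-eye system:

```python
import numpy as np

# Linear (DLT) triangulation of one 3D point from two views.
def triangulate(P1, P2, x1, x2):
    """Solve the homogeneous system built from x ~ P X via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Illustrative intrinsics and a small baseline between the two views.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # translated

X_true = np.array([0.2, -0.1, 2.0])
x1h = P1 @ np.append(X_true, 1.0); x1 = x1h[:2] / x1h[2]
x2h = P2 @ np.append(X_true, 1.0); x2 = x2h[:2] / x2h[2]

X_hat = triangulate(P1, P2, x1, x2)
print(X_hat)  # recovers X_true up to numerical precision
```

With noisy correspondences the null vector becomes the least-squares direction, which is why outlier suppression (LMedS, graph matching) must precede this step.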
Automatic system for 3D reconstruction of the chick eye based on digital photographs.
Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L
2012-01-01
The geometry of anatomical specimens is very complex, and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate it, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA. PMID:21181572
Height inspection of wafer bumps without explicit 3D reconstruction
NASA Astrophysics Data System (ADS)
Dong, Mei; Chung, Ronald; Zhao, Yang; Lam, Edmund Y.
2006-02-01
The shrinking dimensions of electronic devices lead to more stringent requirements on process control and quality assurance in their fabrication. For instance, direct die-to-die bonding requires placement of solder bumps not on PCB but on the wafer itself. Such wafer solder bumps, which are much miniaturized from their counterparts on PCB, still need to have their heights meet the specification, or else the electrical connection could be compromised, the dies crushed, or even the manufacturing equipment damaged. Yet the tiny size, typically tens of microns in diameter, and the textureless, mirror-like nature of the bumps pose a great challenge to the 3D inspection process. This paper addresses how a large number of such wafer bumps could have their heights massively checked against the specification. We assume ball bumps in this work. We propose a novel inspection measure about the collection of bump heights that possesses these advantages: (1) it is sensitive to global and local disturbances to the bump heights, thus serving the bump height inspection purpose; (2) it is invariant to how individual bumps are locally displaced against one another on the substrate surface, thus enduring 2D displacement error in soldering the bumps onto the wafer substrate; and (3) it is largely invariant to how the wafer itself is globally positioned relative to the imaging system, thus having tolerance to repeatability error in wafer placement. This measure makes use of the mirror nature of the bumps, which used to cause difficulty in traditional inspection methods, to capture images of two planes. One contains the bump peaks and the other corresponds to the substrate. With the homography matrices of these two planes and the fundamental matrix of the camera, we synthesize a matrix called the Biplanar Disparity Matrix. This matrix can summarize the bumps' heights in a fast and direct way without going through explicit 3D reconstruction. We also present a design of the imaging and
Quality Analysis on 3D Building Models Reconstructed from UAV Imagery
NASA Astrophysics Data System (ADS)
Jarzabek-Rychard, M.; Karpina, M.
2016-06-01
Recent developments in UAV technology and structure-from-motion techniques have meant that UAVs are becoming standard platforms for 3D data collection. Because of their flexibility and their ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce the labour cost of quickly updating already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation proceeds in three steps: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of better than 18 cm for the planimetric position and about 15 cm for the height component.
Li, Fan; Chenoune, Yasmina; Ouenniche, Meriem; Blanc, Raphaël; Petit, Eric
2014-01-01
Diagnosis and computer-guided therapy of cerebral Arterio-Venous Malformations (AVM) require an accurate understanding of the cerebral vascular network from both structural and biomechanical points of view. We propose to obtain such information by analyzing three-dimensional rotational angiography (3DRA) images. In this paper, we describe a two-step process allowing 1) the automatic 3D segmentation of cerebral vessels from 3DRA images using a region-growing based algorithm and 2) the reconstruction of the segmented vessels using the 3D constrained Delaunay triangulation method. The proposed algorithm was successfully applied to reconstruct cerebral blood vessels from ten datasets of 3DRA images. This software allows the neuroradiologist to separately analyze cerebral vessels for pre-operative intervention planning and therapeutic decision making. PMID:25571245
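The region-growing idea can be sketched on a 2D array (a 3DRA volume would use 6-connected voxels instead of 4-connected pixels); the seed position and intensity tolerance below are illustrative:

```python
import numpy as np
from collections import deque

# Region growing: collect connected pixels whose intensity is within
# a tolerance of the seed's intensity.
def region_grow(img, seed, tol):
    h, w = img.shape
    ref = img[seed]
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not grown[nr, nc]
                    and abs(img[nr, nc] - ref) <= tol):
                grown[nr, nc] = True
                q.append((nr, nc))
    return grown

img = np.zeros((8, 8))
img[2:6, 3:5] = 200.0                 # bright "vessel" on dark background
mask = region_grow(img, (3, 4), tol=50.0)
print(mask.sum())  # prints 8: the 4x2 bright patch is segmented
```

In angiographic data the seed would be placed inside a contrast-enhanced vessel, and the grown mask would then feed the constrained Delaunay surface reconstruction.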
NASA Astrophysics Data System (ADS)
Huang, Sujuan; Wang, Duocheng; He, Chao
2012-11-01
A new method of synthesizing a computer-generated hologram (CGH) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principles of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information about the 3D objects can be gathered from their projection images. Taking account of quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted in a double-circle or four-circle shape to enhance the utilization of the projection spectra. The spectral information from all projection images is then encoded into the CGH based on the Fourier transform, using conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference beam from a laser source, the amplitude and phase information encoded in the CGH is reconstructed through diffraction of the light modulated by the LCD.
Assist feature printability prediction by 3-D resist profile reconstruction
NASA Astrophysics Data System (ADS)
Zheng, Xin; Huang, Jensheng; Chin, Fook; Kazarian, Aram; Kuo, Chun-Chieh
2012-06-01
properties may then be used to optimize the printability vs. efficacy of an SRAF either prior to or during an Optical Proximity Correction (OPC) run. The process models that are used during OPC have never been able to reliably predict which SRAFs will print. This appears to be due to the fact that OPC process models are generally created using data that does not include printed subresolution patterns. An enhancement to compact modeling capability to predict Assist Feature (AF) printability is developed and discussed. A hypsometric map representing the 3-D resist profile was built by applying a first-principle approximation to estimate the "energy loss" from the resist top to bottom. Such a 3-D resist profile is an extrapolation of a well-calibrated traditional OPC model without any additional information. Assist features are detected at either the top of the resist (dark field) or the bottom of the resist (bright field). Such detection can be done by simply extracting top- or bottom-of-resist models from our 3-D resist model. No measurement of assist features is needed to build the AF model, although such measurements can be included if desired; the calibration instead focuses on the resist, accounting for both exposure dose and focus sensitivities. This approach significantly increases the resist model's capability to predict printed SRAF accuracy, and no separate SRAF model needs to be calibrated in addition to the OPC model. Without any increase in computation time, this compact model can draw assist feature contours with real placement and size at any vertical plane. The result is compared and validated against 3-D rigorous modeling as well as SEM images. Since this method does not change any form of compact modeling, it can be integrated into current MBAF solutions without any additional work.
Computational modeling of RNA 3D structures and interactions.
Dawson, Wayne K; Bujnicki, Janusz M
2016-04-01
RNA molecules have key functions in cellular processes beyond being carriers of protein-coding information. These functions are often dependent on the ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is difficult, which has prompted the development of computational methods for structure prediction from sequence. Recent progress in 3D structure modeling of RNA and emerging approaches for predicting RNA interactions with ions, ligands and proteins have been stimulated by successes in protein 3D structure modeling. PMID:26689764
Kumta, Samir; Kumta, Monica; Jain, Leena; Purohit, Shrirang; Ummul, Rani
2015-01-01
Introduction: Replication of the exact three-dimensional (3D) structure of the maxilla and mandible is now a priority whilst attempting reconstruction of these bones to attain a complete functional and aesthetic rehabilitation. We hereby present the process of rapid prototyping using stereolithography to produce templates for modelling bone grafts and implants for maxilla/mandible reconstructions, its applications in tumour/trauma, and outcomes for primary and secondary reconstruction. Materials and Methods: Stereolithographic template-assisted reconstruction was used on 11 patients for the reconstruction of the mandible/maxilla primarily following tumour excision and secondarily for the realignment of post-traumatic malunited fractures or deformity corrections. Data obtained from the computed tomography (CT) scans with 1-mm resolution were converted into a computer-aided design (CAD) using the CT Digital Imaging and Communications in Medicine (DICOM) data. Once a CAD model was constructed, it was converted into a stereolithographic format and then processed by the rapid prototyping technology to produce the physical anatomical model using a resin. This resin model replicates the native mandible and can thus be used off the table as a guide for modelling the bone grafts. Discussion: This conversion of two-dimensional (2D) data from the CT scan into 3D models is a very precise guide for shaping the bone grafts. Further, the CAD can reconstruct the defective half of the mandible using the mirror image principle, and the normal anatomical model can be created to aid secondary reconstructions. Conclusion: This novel approach allows a precise translation of the treatment plan directly to the surgical field. It is also an important teaching tool for implant moulding and fixation, and helps in patient counselling. PMID:26933279
Automated Reconstruction of Walls from Airborne LIDAR Data for Complete 3D Building Modelling
NASA Astrophysics Data System (ADS)
He, Y.; Zhang, C.; Awrangjeb, M.; Fraser, C. S.
2012-07-01
Automated 3D building model generation continues to attract research interest in photogrammetry and computer vision. Airborne Light Detection and Ranging (LIDAR) data with increasing point density and accuracy has been recognized as a valuable source for automated 3D building reconstruction. While considerable achievements have been made in roof extraction, limited research has been carried out in the modelling and reconstruction of walls, which constitute important components of a full building model. The low point density and irregular point distribution of LIDAR observations on vertical walls render this task complex. This paper develops a novel approach for wall reconstruction from airborne LIDAR data. The developed method commences with point cloud segmentation using a region-growing approach. Seed points for planar segments are selected through principal component analysis, and points in the neighbourhood are collected and examined to form planar segments. Afterwards, segment-based classification is performed to identify roofs, walls and planar ground surfaces. For walls with sparse LIDAR observations, a search is conducted in the neighbourhood of each individual roof segment to collect wall points, and the walls are then reconstructed using geometrical and topological constraints. Finally, walls which were not illuminated by the LIDAR sensor are determined via both reconstructed roof data and neighbouring walls. This leads to the generation of topologically consistent, geometrically accurate and complete 3D building models. Experiments have been conducted in two test sites in the Netherlands and Australia to evaluate the performance of the proposed method. Results show that planar segments can be reliably extracted in the two reported test sites, which have different point densities, and that the building walls can be correctly reconstructed if they were illuminated by the LIDAR sensor.
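The PCA-based seed selection can be illustrated by a planarity test on a point neighbourhood; the patch geometry, noise level, and any threshold applied to the ratio are illustrative assumptions:

```python
import numpy as np

# PCA planarity check: for a near-planar neighbourhood, the smallest
# eigenvalue of the point covariance is close to zero, and the
# corresponding eigenvector is the plane normal.
def planarity(points):
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return evals[0] / evals.sum(), evecs[:, 0]

rng = np.random.default_rng(4)
# Roof-like planar patch z = 0.2x + 0.1y with a little sensor noise.
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.01, size=200)
roof = np.column_stack([xy, z])
ratio_roof, normal = planarity(roof)

blob = rng.normal(size=(200, 3))              # non-planar point cluster
ratio_blob, _ = planarity(blob)
print(ratio_roof, ratio_blob)  # the planar patch gives a far smaller ratio
```

Neighbourhoods whose ratio falls under a chosen threshold make good seed points; region growing then adds neighbours consistent with the fitted plane.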
Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach
de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José
2015-01-01
This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above and 88 underwater control points—with 8 common points at water surface—and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers (resp.). Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction should include homography to improve swimming movement analysis accuracy. PMID:26175796
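The direct linear transformation step used above can be illustrated with a minimal two-view triangulation sketch. This is the generic DLT formulation, not the study's six-camera calibration; the projection matrices and the test point below are made up.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates
    of the same point in each view."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenise

# Two hypothetical cameras: identity pose and a 1 m baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]
x2 = (P2 @ h)[:2] / (P2 @ h)[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

With noise-free projections the null space of A recovers the point exactly; with real measurements the SVD gives the least-squares solution.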
A complete system for 3D reconstruction of roots for phenotypic analysis.
Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J
2015-01-01
Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated, as it is self-calibrating. It starts with the detection of root tips in root images from an image sequence generated by turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points of the root boundary, together with a Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm that weights the data points by their eccentricity. The conics projected from the circular trajectories have complex conjugate intersections, which are the images of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are then used to reconstruct a 3D voxel model of the roots. We show results of real 3D root reconstructions that are detailed and realistic enough for phenotypic analysis. PMID:25381112
NASA Astrophysics Data System (ADS)
Monserrat, Carlos; Alcaniz-Raya, Mariano L.; Juan, M. Carmen; Grau Colomer, Vincente; Albalat, Salvador E.
1997-05-01
This paper describes a new method for 3D orthodontic treatment simulation developed for an orthodontics planning system (MAGALLANES). We develop an original system for 3D capture and reconstruction of dental anatomy that avoids the use of dental casts in orthodontic treatments. Two original techniques are presented: a direct one, in which data are acquired directly from the patient's mouth by means of low-cost 3D digitizers, and a mixed one, in which data are obtained by 3D digitizing of hydrocolloid molds. For this purpose we have designed and manufactured an optimized optical measuring system based on laser structured light. We apply these 3D dental models to simulate the 3D movement of teeth, including rotations, during orthodontic treatment. The proposed algorithms make it possible to quantify the effect of an orthodontic appliance on tooth movement. The developed techniques have been integrated in a system named MAGALLANES. This original system presents several tools for 3D simulation and planning of orthodontic treatments. The prototype system has been tested in several orthodontic clinics with very good results.
Parallel algorithm for computing 3-D reachable workspaces
NASA Astrophysics Data System (ADS)
Alameldin, Tarek K.; Sobh, Tarek M.
1992-03-01
The problem of computing the 3-D workspace for redundant articulated chains has applications in a variety of fields such as robotics, computer aided design, and computer graphics. The computational complexity of the workspace problem is at least NP-hard. The recent advent of parallel computers has made practical solutions for the workspace problem possible. Parallel algorithms for computing the 3-D workspace for redundant articulated chains with joint limits are presented. The first phase of these algorithms computes workspace points in parallel. The second phase uses workspace points that are computed in the first phase and fits a 3-D surface around the volume that encompasses the workspace points. The second phase also maps the 3-D points into slices, uses region filling to detect the holes and voids in the workspace, extracts the workspace boundary points by testing the neighboring cells, and tiles the consecutive contours with triangles. The proposed algorithms are efficient for computing the 3-D reachable workspace for articulated linkages, not only those with redundant degrees of freedom but also those with joint limits.
3D reconstruction in laparoscopy with close-range photometric stereo.
Collins, Toby; Bartoli, Adrien
2012-01-01
In this paper we present the first solution to 3D reconstruction in monocular laparoscopy using methods based on Photometric Stereo (PS). Our main contributions are to provide the new theory and practical solutions to successfully apply PS in close-range imaging conditions. We are specifically motivated by a solution with minimal hardware modification to existing laparoscopes. In fact the only physical modification we make is to adjust the colour of the laparoscope's illumination via three colour filters placed at its tip. Once calibrated, our approach can compute 3D from a single image, does not require correspondence estimation, and computes absolute depth densely. We demonstrate the potential of our approach with ground truth ex-vivo and in-vivo experimentation. PMID:23286102
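For background, the classical distant-light Lambertian photometric stereo model that the paper adapts to close range can be sketched as follows: with three known light directions and three intensity measurements per pixel, the scaled normal is the solution of a 3x3 linear system. The light directions and albedo below are invented for illustration; the authors' close-range formulation is more involved.

```python
import numpy as np

# Classical photometric stereo baseline: I = albedo * L @ n, so the
# scaled normal g = albedo * n is recovered as g = solve(L, I).
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)  # unit light directions

n_true = np.array([0.0, 0.0, 1.0])             # a fronto-parallel facet
albedo = 0.8
I = albedo * L @ n_true                        # Lambertian intensities

g = np.linalg.solve(L, I)                      # scaled normal
rho = np.linalg.norm(g)                        # recovered albedo
n = g / rho                                    # recovered unit normal
print(rho, n)
```

In the laparoscopic setting the three "lights" are the three colour-filtered channels of the single illumination source, which is what allows depth from one image.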
First 3D reconstruction of the rhizocephalan root system using MicroCT
NASA Astrophysics Data System (ADS)
Noever, Christoph; Keiler, Jonas; Glenner, Henrik
2016-07-01
Parasitic barnacles (Cirripedia: Rhizocephala) are highly specialized parasites of crustaceans. Instead of an alimentary tract for feeding they utilize a system of roots, which infiltrates the body of their hosts to absorb nutrients. Using X-ray micro computed tomography (MicroCT) and computer-aided 3D reconstruction, we document the spatial organization of this root system, the interna, inside the intact host and also demonstrate its use for morphological examinations of the parasite's reproductive part, the externa. This is the first 3D visualization of the unique root system of the Rhizocephala in situ, showing how it is related to the inner organs of the host. We investigated the interna of different parasitic barnacles of the family Peltogastridae, which are parasitic on anomuran crustaceans. Rhizocephalan parasites of pagurid hermit crabs and lithodid crabs were analysed in this study.
Points based reconstruction and rendering of 3D shapes from large volume dataset
NASA Astrophysics Data System (ADS)
Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming
2003-05-01
In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information contained in them. But the huge volumes of data generated by modern medical imaging devices constantly challenge real-time processing and rendering algorithms. Spurred by the great achievements of Points Based Rendering (PBR) in the field of computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
The spine in 3D. Computed tomographic reformation from 2D axial sections.
Virapongse, C; Gmitro, A; Sarwar, M
1986-01-01
A new program (3D83, General Electric) was used to reformat three-dimensional (3D) images from two-dimensional (2D) computed tomographic axial scans in 18 patients who had routine scans of the spine. The 3D spine images were extremely true to life and could be rotated around all three principal axes (constituting a movie), so that an illusion of head-motion parallax was created. The benefit of 3D reformation with this program is primarily for preoperative planning. It appears that 3D can also effectively determine the patency of foraminal stenosis by reformatting in hemisections. Currently this program is subject to several drawbacks that require user interaction and long reconstruction times. With further improvement, 3D reformation will find increasing clinical applicability. PMID:3787319
Web-based intermediate view reconstruction for multiview stereoscopic 3D display
NASA Astrophysics Data System (ADS)
Kim, Dong-Kyu; Lee, Won-Kyung; Ko, Jung-Hwan; Bae, Kyung-hoon; Kim, Eun-Soo
2005-08-01
In this paper, a web-based intermediate view reconstruction method for a multiview stereoscopic 3D display system is proposed using stereo cameras and disparity maps, an Intel Xeon server computer system and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate and number of views. In the proposed system, stereo images are initially captured using stereo digital cameras and processed on the Intel Xeon server. The captured two-view image data is then compressed by extracting the disparity data between the views and transmitted to a client system through the information network, where the received stereo data is displayed on a 16-view stereoscopic 3D display system using intermediate view reconstruction. The program controlling the overall system is developed with the Microsoft DirectShow SDK. Experimental results show that the proposed system can display 16-view 3D images with 8-bit grayscale at a frame rate of 15 fps in real time.
3D reconstruction of SEM images by use of optical photogrammetry software.
Eulitz, Mona; Reiss, Gebhard
2015-08-01
Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a stereoscopic 3D reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special requirements of SEM. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction suitable for various applications in research and teaching. PMID:26073969
A multi-thread scheduling method for 3D CT image reconstruction using multi-GPU.
Zhu, Yining; Zhao, Yunsong; Zhao, Xing
2012-01-01
Viewed as a whole process, the complete reconstruction of a CT image should include both the computation part on GPUs and the data storage part on hard disks. From this point of view, we propose a Multi-Thread Scheduling (MTS) method to implement 3D CT image reconstruction, for example with the FDK algorithm, trading off computing and storage time. In this method we use multiple threads to control the GPUs and a separate thread to handle data storage, so that calculation and data storage proceed simultaneously. In addition, we use 4-channel textures to hold symmetrical projection data in the CUDA framework, which reduces calculation time significantly. Numerical experiments show that the time for the whole process with our method is almost the same as the data storage time alone. PMID:22635174
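The overlap of computation and storage that MTS exploits follows the classic producer/consumer pattern; the sketch below is a schematic stand-in, with sleeps in place of GPU kernels and disk writes, not the authors' CUDA implementation.

```python
import queue
import threading
import time

def compute_slice(i):
    """Stand-in for GPU reconstruction of one slice."""
    time.sleep(0.01)
    return f"slice-{i}"

stored = []

def storage_worker(q):
    """Dedicated storage thread: drains the queue and 'writes' slices,
    overlapping with ongoing computation."""
    while True:
        item = q.get()
        if item is None:        # sentinel: no more slices coming
            break
        time.sleep(0.01)        # stand-in for a disk write
        stored.append(item)

q = queue.Queue()
writer = threading.Thread(target=storage_worker, args=(q,))
writer.start()
for i in range(8):              # compute loop: push slices as finished
    q.put(compute_slice(i))
q.put(None)                     # signal shutdown
writer.join()
print(len(stored))  # 8
```

Because the writer consumes while the loop still computes, total wall time approaches max(compute, storage) rather than their sum, which is the paper's observation that the whole process takes about as long as storage alone.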
Uchida, Masafumi
2014-04-01
A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. PMID:24464989
Single Particle Cryo-electron Microscopy and 3-D Reconstruction of Viruses
Guo, Fei; Jiang, Wen
2014-01-01
With fast progress in instrumentation, image processing algorithms, and computational resources, single particle electron cryo-microscopy (cryo-EM) 3-D reconstruction of icosahedral viruses has now reached near-atomic resolutions (3–4 Å). With comparable resolutions and more predictable outcomes, cryo-EM is now considered a preferred method over X-ray crystallography for determination of the atomic structure of icosahedral viruses. At near-atomic resolutions, all-atom models or backbone models can be reliably built that allow residue-level understanding of viral assembly and of conformational changes among different stages of the viral life cycle. With the development of asymmetric reconstruction, it is now possible to visualize the complete structure of a complex virus with not only its icosahedral shell but also its multiple non-icosahedral structural features. In this chapter, we describe single particle cryo-EM experimental and computational procedures for both near-atomic resolution reconstruction of icosahedral viruses and asymmetric reconstruction of viruses with both icosahedral and non-icosahedral structure components. Procedures for rigorous validation of the reconstructions and resolution evaluations using truly independent de novo initial models and refinements are also introduced. PMID:24357374
Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk
2015-03-15
Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.
APPROXIMATION OF SURFACES IN QUANTITATIVE 3-D RECONSTRUCTIONS
In serial section reconstructions a series of planar profiles is taken, representing curves on the surface of the structure to be reconstructed. For a number of quantitative serial section methods, approximation of a surface is done by the formation of tiles between points of adja...
3D reconstruction based on CT image and its application
NASA Astrophysics Data System (ADS)
Zhang, Jianxun; Zhang, Mingmin
2004-03-01
Reconstructing a 3-D model of the liver and its internal piping system, and simulating liver surgery, can increase the accuracy and safety of the operation, with the aims of minimizing the surgical wound, shortening operation time, increasing the success rate, reducing medical costs and promoting patient recovery. This paper describes the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system and simulate liver surgery from CT images. A direct volume rendering method establishes the 3D model of the liver. In an OpenGL environment, a space point rendering method displays the liver's internal piping system and the surgical simulation. Finally, we adopt a wavelet transform method to compress the medical image data.
NASA Astrophysics Data System (ADS)
Xu, Yiwen; Pickering, J. Geoffrey; Nong, Zengxuan; Gibson, Eli; Ward, Aaron D.
2014-03-01
In contrast to imaging modalities such as magnetic resonance imaging and micro computed tomography, digital histology reveals multiple stained tissue features at high resolution (0.25 μm/pixel). However, the two-dimensional (2D) nature of histology challenges three-dimensional (3D) quantification and visualization of the different tissue components, cellular structures, and subcellular elements. This limitation is particularly relevant to the vasculature, which has a complex and variable structure within tissues. The objective of this study was to perform a fully automated 3D reconstruction of histology tissue in the mouse hind limb preserving the accurate systemic orientation of the tissues, stained with hematoxylin and immunostained for smooth muscle α actin. We performed a 3D reconstruction using pairwise rigid registrations of 5 μm thick, paraffin-embedded serial sections, digitized at 0.25 μm/pixel. Each registration was performed using the iterative closest points algorithm on blood vessel landmarks. Landmarks were vessel centroids, determined according to a signed distance map of each pixel to a decision boundary in hue-saturation-value color space; this decision boundary was determined based on manual annotation of a separate training set. Cell nuclei were then automatically extracted and corresponded to refine the vessel landmark registration. Homologous nucleus landmark pairs appearing on not more than two adjacent slides were chosen to avoid registrations which force curved or non-section-orthogonal structures to be straight and section-orthogonal. The median accumulated target registration errors ± interquartile ranges for the vessel landmark registration and the nucleus landmark refinement were 43.4 ± 42.8 μm and 2.9 ± 1.7 μm, respectively (p < 0.0001). Fully automatic and accurate 3D rigid reconstruction of mouse hind limb histology imaging is feasible based on extracted vasculature and nuclei.
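The update at the core of an iterative-closest-points registration of corresponded landmarks is a least-squares rigid fit; a minimal 2D Kabsch/Procrustes sketch with invented centroid coordinates (the study's full ICP loop, correspondence search and 0.25 μm/pixel imagery are not reproduced here):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid alignment (Kabsch) of corresponded landmark
    sets src, dst of shape (N, 2): returns R, t with dst ~ R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical vessel centroids on one section, rotated 10 degrees and
# shifted on the adjacent section
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (6, 2))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
dst = src @ R_true.T + np.array([5.0, -3.0])
R, t = rigid_fit(src, dst)
print(np.allclose(R, R_true) and np.allclose(t, [5.0, -3.0]))  # True
```

ICP alternates this closed-form fit with nearest-neighbour correspondence updates until the landmark pairing stabilizes.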
Fringe projection profilometry for panoramic 3D reconstruction
NASA Astrophysics Data System (ADS)
Almaraz-Cabral, César-Cruz; Gonzalez-Barbosa, José-Joel; Villa, Jesús; Hurtado-Ramos, Juan-Bautista; Ornelas-Rodriguez, Francisco-Javier; Córdova-Esparza, Diana-Margarita
2016-03-01
In this paper, we introduce a panoramic profilometric system to reconstruct inner cylindrical environments. The system projects circular fringes and uses a temporal phase unwrapping technique. The recovered phase map is used to reconstruct objects placed on the inner cylindrical surface. We derive a phase-to-depth conversion formula for this system. The use of fringe projection allows dense reconstructions. The panoramic system is composed of a digital projector, two parabolic mirrors and a CCD camera. All these components share a common axis with a reference cylinder. This paper presents results for several distinct objects.
Computing 3-D structure of rigid objects using stereo and motion
NASA Technical Reports Server (NTRS)
Nguyen, Thinh V.
1987-01-01
Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.
Detectability limitations with 3-D point reconstruction algorithms using digital radiography
Lindgren, Erik
2015-03-31
The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography, and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects increases the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around some slightly larger defect. In addition, the uncertainty is explored in connection with its expected effects on component fatigue life and for different amounts of prior object-defect assumptions.
Reconstructing 3-D skin surface motion for the DIET breast cancer screening system.
Botterill, Tom; Lotz, Thomas; Kashif, Amer; Chase, J Geoffrey
2014-05-01
Digital image-based elasto-tomography (DIET) is a prototype system for breast cancer screening. A breast is imaged while being vibrated, and the observed surface motion is used to infer the internal stiffness of the breast, hence identifying tumors. This paper describes a computer vision system for accurately measuring 3-D surface motion. A model-based segmentation is used to identify the profile of the breast in each image, and the 3-D surface is reconstructed by fitting a model to the profiles. The surface motion is measured using a modern optical flow implementation customized to the application, then trajectories of points on the 3-D surface are given by fusing the optical flow with the reconstructed surfaces. On data from human trials, the system is shown to exceed the performance of an earlier marker-based system at tracking skin surface motion. We demonstrate that the system can detect a 10 mm tumor in a silicone phantom breast. PMID:24770915
The New Approach to Sport Medicine: 3-D Reconstruction
ERIC Educational Resources Information Center
Ince, Alparslan
2015-01-01
The aim of this study is to present a new approach to sport medicine. Comparative analysis of the Vertebrae Lumbales was done in sedentary group and Muay Thai athletes. It was done by acquiring three dimensional (3-D) data and models through photogrammetric methods from the Multi-detector Computerized Tomography (MDCT) images of the Vertebrae…
3-D Virtual and Physical Reconstruction of Bendego Iron
NASA Astrophysics Data System (ADS)
Belmonte, S. L. R.; Zucolotto, M. E.; Fontes, R. C.; dos Santos, J. R. L.
2012-09-01
3D laser scanning is applied to meteoritics to preserve the original shape of meteorites before cutting; saving the data in STL (stereolithography) format makes it easy to print three-dimensional physical models and to generate a digital replica.
Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision
NASA Astrophysics Data System (ADS)
Diskin, Yakov; Asari, Vijayan K.
2012-10-01
Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points, translating the distance a feature has traveled within the frame, in pixels, into real-world depth values. The tracked feature points are then plotted to form a dense and colorful point cloud. Due to inevitable small vibrations of the camera and mismatches within the feature tracking algorithm, the point cloud model contains a significant number of misplaced points appearing as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noise not associated with any nearby object. The noise filter combines all points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques we are able to eliminate the unwanted floating points while retaining points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original positions without the attached noise components. We evaluate the resulting noiseless point cloud by utilizing an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.
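The per-layer erosion/dilation cleanup can be illustrated with a minimal binary opening on one 2D depth layer; the grid, the 3x3 structuring element and the point layout below are illustrative, not the authors' parameters.

```python
import numpy as np

def erode(m):
    """3x3 binary erosion: a pixel survives only if its whole
    neighbourhood is set."""
    p = np.pad(m, 1)
    h, w = m.shape
    return np.min([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)

def dilate(m):
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    p = np.pad(m, 1)
    h, w = m.shape
    return np.max([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)

# One depth layer: a 3x3 block of object points plus one isolated
# noise point. Opening (erode, then dilate) removes the lone point
# and restores the block.
layer = np.zeros((7, 7), dtype=int)
layer[2:5, 2:5] = 1          # points belonging to a real object
layer[0, 6] = 1              # a stray, misplaced point
opened = dilate(erode(layer))
print(opened.sum(), opened[0, 6])  # 9 0
```

Applied layer by layer and then lifted back to 3D, this is the essence of removing floating points while keeping larger objects intact.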
Visualization and 3D Reconstruction of Flame Cells of Taenia solium (Cestoda)
Valverde-Islas, Laura E.; Arrangoiz, Esteban; Vega, Elio; Robert, Lilia; Villanueva, Rafael; Reynoso-Ducoing, Olivia; Willms, Kaethe; Zepeda-Rodríguez, Armando; Fortoul, Teresa I.; Ambrosio, Javier R.
2011-01-01
Background Flame cells are the terminal cells of protonephridial systems, which are part of the excretory systems of invertebrates. Although knowledge of their biological role is incomplete, there is a consensus that these cells perform excretion/secretion activities. It has been suggested that the flame cells participate in the maintenance of the osmotic environment that the cestodes require to live inside their hosts. In live Platyhelminthes, by light microscopy, the cells appear to beat their flames rapidly and, at the ultrastructural level, the cells have a large body enclosing a tuft of cilia. Few studies have been performed to define the localization of the cytoskeletal proteins of these cells, and it is unclear how these proteins are involved in cell function. Methodology/Principal Findings Parasites of two different developmental stages of T. solium were used: cysticerci recovered from naturally infected pigs and intestinal adults obtained from immunosuppressed and experimentally infected golden hamsters. Hamsters were fed viable cysticerci to recover adult parasites after one month of infection. The present studies focused on flame cells of cysticercus tissues. Using several methods, such as video, confocal and electron microscopy, in addition to computational analysis for reconstruction and modeling, we have provided a 3D visual rendition of the cytoskeletal architecture of Taenia solium flame cells. Conclusions/Significance We consider that visual representations of cells open a new way for understanding the role of these cells in the excretory systems of Platyhelminthes. After reconstruction, the observation of high-resolution 3D images allowed for virtual observation of the interior composition of cells. A combination of microscopic images, computational reconstructions and 3D modeling of cells appears to be useful for inferring the cellular dynamics of the flame cell cytoskeleton. PMID:21412407
Acceleration of EM-Based 3D CT Reconstruction Using FPGA.
Choi, Young-Kyu; Cong, Jason
2016-06-01
Reducing radiation doses is one of the key concerns in computed tomography (CT) based 3D reconstruction. Although iterative methods such as the expectation maximization (EM) algorithm can be used to address this issue, applying this algorithm to practice is difficult due to the long execution time. Our goal is to decrease this long execution time to an order of a few minutes, so that low-dose 3D reconstruction can be performed even in time-critical events. In this paper we introduce a novel parallel scheme that takes advantage of numerous block RAMs on field-programmable gate arrays (FPGAs). Also, an external memory bandwidth reduction strategy is presented to reuse both the sinogram and the voxel intensity. Moreover, a customized processing engine based on the FPGA is presented to increase overall throughput while reducing the logic consumption. Finally, a hardware and software flow is proposed to quickly construct a design for various CT machines. The complete reconstruction system is implemented on an FPGA-based server-class node. Experiments on actual patient data show that a 26.9 × speedup can be achieved over a 16-thread multicore CPU implementation. PMID:26462240
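The EM iteration being accelerated is, in its simplest (MLEM) form, a multiplicative update x ← x / (Aᵀ1) · Aᵀ(y / (Ax)). The toy dense-matrix sketch below is illustrative only: real CT projectors are huge and sparse, and the paper's FPGA mapping is not reproduced here.

```python
import numpy as np

# Toy MLEM loop on a made-up 4x3 system matrix A and noise-free
# measurements y. Each iteration multiplies the current image by the
# backprojected ratio of measured to forward-projected data.
rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, (4, 3))      # stand-in projection matrix
x_true = np.array([1.0, 2.0, 0.5])
y = A @ x_true                         # noise-free "sinogram"

x = np.ones(3)                         # flat initial image
sens = A.T @ np.ones(4)                # sensitivity term A^T 1
for _ in range(2000):
    x = x / sens * (A.T @ (y / (A @ x)))

print(x)
```

The update preserves nonnegativity and increases the Poisson likelihood each step; the per-iteration cost is dominated by the forward projection A @ x and backprojection A.T @ (...), which is exactly the work the paper moves into FPGA block RAMs.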
Photometric analysis as an aid to 3D reconstruction of indoor scenes
NASA Astrophysics Data System (ADS)
Serfaty, Veronique; Ackah-Miezan, Andrew; Lutton, Evelyne; Gagalowicz, Andre
1993-06-01
In an Image Understanding framework, our aim is to reconstruct an actual indoor scene from a (sequence of) color pair(s) of stereoscopic images. The desired (synthesis-oriented) description requires the analysis of both 3D geometric and photometric parameters in order to use the feedback provided by image synthesis to control the image analysis. The environment model is a hierarchy of polyhedral 3D objects (planar Lambertian facets). Two main physical phenomena determine the image intensities: surface reflectance properties and light sources. From illumination models established in Computer Graphics, we derive the appropriate irradiance equations. Rather than a point source located at infinity, we choose isotropic point sources with decreasing energy. This allows us to discriminate small irradiance gradients inside regions. For indoor scenes, such photometric models are more realistic, due to the presence of ceiling lights, desk lamps, and so on. Both a photometric reconstruction algorithm and a technique for localizing the 'dominant' light source are presented along with lighting simulations. For comparison purposes, corresponding artificial images are shown. With this work, we wish to highlight the fruitful cooperation between the Vision and Graphics domains in order to perform a more accurate scene reconstruction, both photometrically and geometrically. The emphasis is on the illumination characterization, which influences the scene interpretation.
Enhanced 3-D-reconstruction algorithm for C-arm systems suitable for interventional procedures.
Wiesent, K; Barth, K; Navab, N; Durlak, P; Brunner, T; Schuetz, O; Seissler, W
2000-05-01
Increasingly, three-dimensional (3-D) imaging technologies are used in medical diagnosis, for therapy planning, and during interventional procedures. We describe the possibilities of fast 3-D-reconstruction of high-contrast objects with high spatial resolution from only a small series of two-dimensional (2-D) planar radiographs. The special problems arising from the intended use of an open, mechanically unstable C-arm system are discussed. For the description of the irregular sampling geometry, homogeneous coordinates are used thoroughly. The well-known Feldkamp algorithm is modified to incorporate corresponding projection matrices without any decomposition into intrinsic and extrinsic parameters. Some approximations to speed up the whole reconstruction procedure and the tradeoff between image quality and computation time are also considered. Using standard hardware the reconstruction of a 256(3) cube is now possible within a few minutes, a time that is acceptable during interventions. Examples for cranial vessel imaging from some clinical test installations will be shown as well as promising results for bone imaging with a laboratory C-arm system. PMID:11021683
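Working directly with homogeneous 3x4 projection matrices, without decomposing them into intrinsic and extrinsic parameters, can be illustrated with a minimal sketch. The pinhole-style matrix values below are hypothetical, not a calibrated C-arm geometry:

```python
import numpy as np

def project_point(P, X):
    """Map a 3D point X = (x, y, z) to detector coordinates (u, v) using a
    3x4 projection matrix P in homogeneous coordinates. A Feldkamp-type
    backprojector can use P as-is, with no intrinsic/extrinsic split."""
    Xh = np.append(np.asarray(X, float), 1.0)   # homogeneous 3D point
    u, v, w = P @ Xh                            # homogeneous detector coords
    return u / w, v / w                         # perspective divide

# Illustrative pinhole-style matrix: focal length f, principal point (cu, cv).
f, cu, cv = 1000.0, 256.0, 256.0
P = np.array([[f, 0.0, cu, 0.0],
              [0.0, f, cv, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
u, v = project_point(P, (0.0, 0.0, 500.0))      # point on the optical axis
```

A point on the optical axis lands on the principal point, which is a quick sanity check for any calibrated projection matrix.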
Thermal infrared exploitation for 3D face reconstruction
NASA Astrophysics Data System (ADS)
Abayowa, Bernard O.
2009-05-01
Despite the advances in face recognition research, current face recognition systems are still not accurate or robust enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and then a 3D modeler is used to estimate the geometric structure from the predicted visual imagery. This research will find its application in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alert, missing dementia patient) and manhunt scenarios.
3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine
NASA Astrophysics Data System (ADS)
Hamamoto, Kazuhiko; Sato, Motoyoshi
3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized far more cheaply than X-ray CT and makes it possible to obtain 3D images with an X-ray car or portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.
Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction
NASA Astrophysics Data System (ADS)
Austin, Christian D.; Moses, Randolph L.
2006-05-01
This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.
3D reconstruction of tropospheric cirrus clouds by stereovision system
NASA Astrophysics Data System (ADS)
Nadjib Kouahla, Mohamed; Moreels, Guy; Seridi, Hamid
2016-07-01
A stereo imaging method is applied to measure the altitude of cirrus clouds and provide a 3D map of the altitude of the layer centroid. They are located in the high troposphere and sometimes in the lower stratosphere, between 6 and 10 km high. Two simultaneous images of the same scene are taken with Canon (400D) cameras at two sites 37 km apart. Each image is processed in order to invert the perspective effect and provide a satellite-type view of the layer. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a correlation coefficient (ZNCC: Zero-mean Normalized Cross-Correlation, or ZSSD: Zero-mean Sum of Squared Differences). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in June 2014 in France. The images were taken simultaneously at Marnay (47°17'31.5" N, 5°44'58.8" E; altitude 275 m), 25 km northwest of Besançon, and at Mont Poupet (46°58'31.5" N, 5°52'22.7" E; altitude 600 m), 43 km southwest of Besançon. 3D maps of natural cirrus clouds and artificial features like aircraft trails are retrieved. They are compared with pseudo-relief intensity maps of the same region. The mean altitude of the cirrus barycenter was located at 8.5 ± 1 km on June 11.
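The ZNCC matching score used for stereo correspondence is a standard formula and can be sketched as follows; this is a generic implementation, not the campaign's processing chain:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean Normalized Cross-Correlation between two equally sized
    patches. Returns 1.0 for patches identical up to an affine intensity
    change (gain and offset), which makes it robust for low-contrast scenes."""
    a = np.asarray(a, float).ravel() 
    b = np.asarray(b, float).ravel()
    a = a - a.mean()                      # remove offset
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

patch = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
s_same = zncc(patch, 2.0 * patch + 5.0)   # gain/offset change: score 1.0
s_anti = zncc(patch, -patch)              # inverted contrast: score -1.0
```

Invariance to gain and offset is exactly why ZNCC (rather than raw cross-correlation) is suited to matching faint cirrus structure between the two cameras.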
Effects of camera location on the reconstruction of 3D flare trajectory with two cameras
NASA Astrophysics Data System (ADS)
Özsaraç, Seçkin; Yeşilkaya, Muhammed
2015-05-01
Flares are valuable electronic warfare assets in the battle against infrared-guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, determination of corresponding pixels between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on flare trajectory estimation performance by simulation. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image-plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we use two sources of error. One models the uncertainties in the determination of the camera view vectors, i.e. the orientations of the cameras are measured with noise. The second noise source models the imperfections of corresponding-pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated from the corresponding pixel indices, the view vectors and the FOV of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation-error performance is found for the given aircraft and flare trajectories.
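The two-camera triangulation step can be illustrated with the standard linear (DLT) method for calibrated cameras. The projection matrices below are toy values, and the paper's own triangulation algorithm may differ:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: recover the 3D point whose projections
    through 3x4 matrices P1, P2 are the pixel coordinates uv1, uv2.
    The point is the null vector of a 4x4 system, found via SVD."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # homogeneous solution, up to scale
    return X[:3] / X[3]

def project(P, X):
    """Helper: pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one translated along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless correspondences the DLT solution is exact; with the noise sources described in the abstract, the recovered point minimizes an algebraic (not geometric) error, which is the usual trade-off of the linear method.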
Method for 3D fibre reconstruction on a microrobotic platform.
Hirvonen, J; Myllys, M; Kallio, P
2016-07-01
Automated handling of a natural fibrous object requires a method for acquiring the three-dimensional geometry of the object, because its dimensions cannot be known beforehand. This paper presents a method for calculating the three-dimensional reconstruction of a paper fibre on a microrobotic platform that contains two microscope cameras. The method is based on detecting curvature changes in the fibre centreline, and using them as the corresponding points between the different views of the images. We test the developed method with four fibre samples and compare the results with the references measured with an X-ray microtomography device. We rotate the samples through 16 different orientations on the platform and calculate the three-dimensional reconstruction to test the repeatability of the algorithm and its sensitivity to the orientation of the sample. We also test the noise sensitivity of the algorithm, and record the mismatch rate of the correspondences provided. We use the iterative closest point algorithm to align the measured three-dimensional reconstructions with the references. The average point-to-point distances between the reconstructed fibre centrelines and the references are 20-30 μm, and the mismatch rate is low. Given the manipulation tolerance, this shows that the method is well suited to automated fibre grasping. This has also been demonstrated with actual grasping experiments. PMID:26695385
3D model tools for architecture and archaeology reconstruction
NASA Astrophysics Data System (ADS)
Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice
2016-06-01
The main objective of architectural and patrimonial surveying is to provide precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological objects and sites) for preservation and protection, for scientific studies and restoration purposes, and for presentation to the general public. Cultural heritage documentation involves an interdisciplinary approach whose purpose is an overall understanding of the object itself and an integration of the information that characterizes it. The accuracy and precision of the model are directly influenced by the quality of the measurements made in the field and by the quality of the software. The software is in a process of continuous development, which brings many improvements. On the other hand, compared with aerial photogrammetry, close-range photogrammetry, and particularly architectural photogrammetry, is not limited to vertical photographs taken with special cameras. The methodology of terrestrial photogrammetry has changed significantly, and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanner) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method, and the fourth dimension: time. The paper proves its applicability, as photogrammetric technologies are nowadays used on a large scale for obtaining 3D models of cultural heritage objects, efficacious in their assessment and monitoring and thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method, a very important issue for both the industrial and scientific segments when deciding in which technology to invest more research and funds.
Quantitative Reconstructions of 3D Chemical Nanostructures in Nanowires.
Rueda-Fonseca, P; Robin, E; Bellet-Amalric, E; Lopez-Haro, M; Den Hertog, M; Genuist, Y; André, R; Artioli, A; Tatarenko, S; Ferrand, D; Cibert, J
2016-03-01
Energy dispersive X-ray spectrometry is used to extract a quantitative 3D composition profile of heterostructured nanowires. The analysis of hypermaps recorded along a limited number of projections, with a preliminary calibration of the signal associated with each element, is compared to the intensity profiles calculated for a model structure with successive shells of circular, elliptic, or faceted cross sections. This discrete tomographic technique is applied to II-VI nanowires grown by molecular beam epitaxy, incorporating ZnTe and CdTe and their alloys with Mn and Mg, with typical size down to a few nanometers and Mn or Mg content as low as 10%. PMID:26837636
Quality Analysis of 3d Surface Reconstruction Using Multi-Platform Photogrammetric Systems
NASA Astrophysics Data System (ADS)
Lari, Z.; El-Sheimy, N.
2016-06-01
In recent years, the necessity of accurate 3D surface reconstruction has become more pronounced for a wide range of mapping, modelling, and monitoring applications. The 3D data for satisfying the needs of these applications can be collected using different digital imaging systems. Among them, photogrammetric systems have recently received considerable attention due to significant improvements in digital imaging sensors, the emergence of new mapping platforms, and the development of innovative data processing techniques. To date, a variety of techniques have been proposed for 3D surface reconstruction using imagery collected by multi-platform photogrammetric systems. However, these approaches suffer from the lack of a well-established quality control procedure which evaluates the quality of reconstructed 3D surfaces independent of the utilized reconstruction technique. Hence, this paper introduces a new quality assessment platform for the evaluation of 3D surface reconstruction using photogrammetric data. This quality control procedure considers the quality of the input data, the processing procedures, and the photo-realistic 3D surface modelling. The feasibility of the proposed quality control procedure is finally verified by quality assessment of 3D surface reconstructions using images from different photogrammetric systems.
ERIC Educational Resources Information Center
Matsuda, Hiroshi; Shindo, Yoshiaki
2006-01-01
3D computer graphics (3D-CG) animation featuring a speaking virtual actor is very effective as an educational medium, but producing a 3D-CG animation takes a long time. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using a Virtual Actor.…
Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration
NASA Astrophysics Data System (ADS)
Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.
2012-02-01
The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions, based on standard X-ray images, is known to pose difficulties in obtaining accurate measures, especially three-dimensional tunnel positions. This is largely due to the variability of individual knee joint pose relative to the X-ray plates. Accurate results have been reported using postoperative CT; however, its extensive use in clinical routine is hampered by the requirement of a CT scan for each patient, which is not available for most ACL reconstructions. These difficulties are addressed by the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy: a distance error of 0.53+/-0.30 mm.
Chae, Michael P; Lin, Frank; Spychal, Robert T; Hunter-Smith, David J; Rozen, Warren Matthew
2015-02-01
In reconstructive surgery, preoperative planning is essential for optimal functional and aesthetic outcome. Creating a three-dimensional (3D) model from two-dimensional (2D) imaging data by rapid prototyping has been used in industrial design for decades but has only recently been introduced for medical application. 3D printing is one such technique that is fast, convenient, and relatively affordable. In this report, we present a case in which a reproducible method for producing a 3D-printed "reverse model" representing a skin wound defect was used for flap design and harvesting. The patient was an 82-year-old man with an exposed ankle prosthesis after serial soft tissue debridements for wound infection. Soft tissue coverage and dead-space filling were planned with a composite radial forearm free flap (RFFF). Computed tomographic angiography (CTA) of the donor site (left forearm), recipient site (right ankle), and the left ankle was performed. The 2D data from the CTA were 3D-reconstructed using computer software, with a 3D image of the left ankle used as a "control." A 3D model was created by superimposing the left and right ankle images to create a "reverse image" of the defect, and printed using a 3D printer. The RFFF was thus planned and executed effectively, without complication. To our knowledge, this is the first report of a mechanism for calculating a soft tissue wound defect and producing a 3D model that may be useful for surgical planning. 3D printing, and particularly "reverse" modeling, may be a versatile option in reconstructive planning, and has the potential for broad application. PMID:25046728
The 3-D inelastic analyses for computational structural mechanics
NASA Technical Reports Server (NTRS)
Hopkins, D. A.; Chamis, C. C.
1989-01-01
The 3-D inelastic analysis method is a focused program with the objective to develop computationally effective analysis methods and attendant computer codes for three-dimensional, nonlinear time and temperature dependent problems present in the hot section of turbojet engine structures. Development of these methods was a major part of the Hot Section Technology (HOST) program over the past five years at Lewis Research Center.
3D digital breast tomosynthesis image reconstruction using anisotropic total variation minimization.
Seyyedi, Saeed; Yildirim, Isa
2014-01-01
This paper presents a compressed sensing based reconstruction method for 3D digital breast tomosynthesis (DBT) imaging. The algebraic reconstruction technique (ART) has been used in DBT imaging by minimizing the isotropic total variation (TV) of the reconstructed image. The resolution in DBT differs in the sagittal and axial directions, which should be accounted for during TV minimization. In this study we develop a 3D anisotropic TV (ATV) minimization by considering the different resolutions in different directions. A customized 3D Shepp-Logan phantom was generated to mimic a real DBT image by considering the overlapping-tissue and directional resolution issues. Results of ART, ART+3D TV and ART+3D ATV are compared using structural similarity (SSIM) diagrams. PMID:25571377
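An anisotropic TV penalty with per-direction weights can be sketched as below. This is a minimal sketch of the penalty only, with hypothetical weights; the paper's ART+ATV solver and its specific weighting are not reproduced here:

```python
import numpy as np

def anisotropic_tv3d(vol, w=(1.0, 1.0, 1.0)):
    """Weighted anisotropic total variation of a 3D volume: the sum of
    absolute forward differences along each axis, weighted per axis so the
    coarser-resolution direction (e.g. axial in DBT) can be penalized
    differently from the in-plane directions."""
    tv = 0.0
    for axis, wa in enumerate(w):
        d = np.diff(vol, axis=axis)     # forward differences along one axis
        tv += wa * np.abs(d).sum()
    return tv

# A single step edge along axis 0: boundary area is 4*4 = 16 unit jumps.
vol = np.zeros((4, 4, 4))
vol[2:, :, :] = 1.0
tv_iso = anisotropic_tv3d(vol)                    # all weights 1
tv_aniso = anisotropic_tv3d(vol, (0.5, 1.0, 1.0)) # down-weight axis 0
```

Setting the isotropic case (equal weights) recovers the standard anisotropic-norm TV; unequal weights change only how edges in each direction are penalized, not the edge content itself.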
NASA Astrophysics Data System (ADS)
Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel
1993-07-01
We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior, give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscopical images of a 10 micrometers fluorescent bead, and a four-cell Volvox embryo are shown.
Bourantas, Christos V; Kourtis, Iraklis C; Plissiti, Marina E; Fotiadis, Dimitrios I; Katsouras, Christos S; Papafaklis, Michail I; Michalis, Lampros K
2005-12-01
The aim of this study is to describe a new method for the three-dimensional reconstruction of coronary arteries and its quantitative validation. Our approach is based on the fusion of the data provided by intravascular ultrasound images (IVUS) and biplane angiographies. A specific segmentation algorithm is used for the detection of the regions of interest in intravascular ultrasound images. A new methodology is also introduced for the accurate extraction of the catheter path. In detail, a cubic B-spline is used for approximating the catheter path in each biplane projection. Each B-spline curve is swept along the normal direction of its X-ray angiographic plane forming a surface. The intersection of the two surfaces is a 3D curve, which represents the reconstructed path. The detected regions of interest in the IVUS images are placed perpendicularly onto the path and their relative axial twist is computed using the sequential triangulation algorithm. Then, an efficient algorithm is applied to estimate the absolute orientation of the first IVUS frame. In order to obtain 3D visualization the commercial package Geomagic Studio 4.0 is used. The performance of the proposed method is assessed using a validation methodology which addresses the separate validation of each step followed for obtaining the coronary reconstruction. The performance of the segmentation algorithm was examined in 80 IVUS images. The reliability of the path extraction method was studied in vitro using a metal wire model and in vivo in a dataset of 11 patients. The performance of the sequential triangulation algorithm was tested in two gutter models and in the coronary arteries (marked with metal clips) of six cadaveric sheep hearts. Finally, the accuracy in the estimation of the first IVUS frame absolute orientation was examined in the same set of cadaveric sheep hearts. The obtained results demonstrate that the proposed reconstruction method is reliable and capable of depicting the morphology of
Robust registration for removing vibrations in 3D reconstruction of web material
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Garcia, Daniel F.
2015-05-01
Vibrations are a major challenge in laser-based 3D reconstruction of web material. In uncontrolled environments, the forward movement of web material along a track is inevitably affected by vibrations. These oscillations significantly degrade the performance of the 3D reconstruction system, as they are incorrectly interpreted as irregularities on the surface of the material, leading to an erroneous reconstruction of the 3D surface. This work proposes a method to estimate and remove these vibrations based on a robust registration procedure. Registration is used to estimate the vibrations, and a rigid transformation is used to compensate the movements, removing the effects of vibrations on the 3D reconstruction. The proposed method is applied to an extensive dataset, both synthetic and real, with very good results.
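The rigid-transformation compensation step can be illustrated with the standard least-squares (Kabsch) solution for paired 3D points; this is a generic sketch, not the paper's robust registration procedure:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch): find rotation R and
    translation t with R @ p + t ~ q for paired points p in src, q in dst.
    Applying the inverse transform to each profile would compensate an
    estimated vibration displacement."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known vibration-like offset: small rotation about z plus a shift.
th = 0.05
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.0, 0.3, -0.1])
rng = np.random.default_rng(0)
pts = rng.normal(size=(30, 3))
R_est, t_est = rigid_register(pts, pts @ R_true.T + t_true)
```

A robust variant would down-weight outlier correspondences (e.g. from genuine surface defects) before solving, so that real irregularities are not absorbed into the vibration estimate.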
NASA Astrophysics Data System (ADS)
Vallet, B.; Soheilian, B.; Brédif, M.
2014-08-01
The 3D reconstruction of similar 3D objects detected in 2D images faces a major issue when it comes to grouping the 2D detections into clusters from which the individual 3D objects are reconstructed. Simple clustering heuristics fail as soon as similar objects are close together. This paper formulates a framework that uses the geometric quality of the reconstruction as a hint for proper clustering. We present a methodology to solve the resulting combinatorial optimization problem, with some simplifications and approximations that make it tractable. The proposed method is applied to the reconstruction of 3D traffic signs from their 2D detections to demonstrate its capacity to resolve ambiguities.
FUN3D and CFL3D Computations for the First High Lift Prediction Workshop
NASA Technical Reports Server (NTRS)
Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.
2011-01-01
Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.
3D reconstruction and spatial auralization of the "Painted Dolmen" of Antelas
NASA Astrophysics Data System (ADS)
Dias, Paulo; Campos, Guilherme; Santos, Vítor; Casaleiro, Ricardo; Seco, Ricardo; Sousa Santos, Beatriz
2008-02-01
This paper presents preliminary results on the development of a 3D audiovisual model of the Anta Pintada (painted dolmen) of Antelas, a Neolithic chamber tomb located in Oliveira de Frades and listed as Portuguese national monument. The final aim of the project is to create a highly accurate Virtual Reality (VR) model of this unique archaeological site, capable of providing not only visual but also acoustic immersion based on its actual geometry and physical properties. The project started in May 2006 with in situ data acquisition. The 3D geometry of the chamber was captured using a Laser Range Finder. In order to combine the different scans into a complete 3D visual model, reconstruction software based on the Iterative Closest Point (ICP) algorithm was developed using the Visualization Toolkit (VTK). This software computes the boundaries of the room on a 3D uniform grid and populates its interior with "free-space nodes", through an iterative algorithm operating like a torchlight illuminating a dark room. The envelope of the resulting set of "free-space nodes" is used to generate a 3D iso-surface approximating the interior shape of the chamber. Each polygon of this surface is then assigned the acoustic absorption coefficient of the corresponding boundary material. A 3D audiovisual model operating in real-time was developed for a VR Environment comprising head-mounted display (HMD) I-glasses SVGAPro, an orientation sensor (tracker) InterTrax 2 with 3 Degrees Of Freedom (3DOF) and stereo headphones. The auralisation software is based on a geometric model. This constitutes a first approach, since geometric acoustics have well-known limitations in rooms with irregular surfaces. The immediate advantage lies in their inherent computational efficiency, which allows real-time operation. The program computes the early reflections forming the initial part of the chamber's impulse response (IR), which carry the most significant cues for source localisation. These early
Bayesian 3D velocity field reconstruction with VIRBIUS
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem
2016-03-01
I describe a new Bayesian algorithm to infer the full three-dimensional velocity field from observed distances and spectroscopic galaxy catalogues. In addition to the velocity field itself, the algorithm reconstructs true distances, some cosmological parameters and specific non-linearities in the velocity field. The algorithm handles selection effects and miscalibration issues and can easily be extended to handle direct fitting of, e.g., the inverse Tully-Fisher relation. I first describe the algorithm in detail alongside its performance. The algorithm is implemented in the VIRBIUS (VelocIty Reconstruction using Bayesian Inference Software) software package. I then test it on different mock distance catalogues with varying complexity of observational issues. The model proved to give robust velocity measurements for mock catalogues of 3000 galaxies. I expect the core of the algorithm to scale to tens of thousands of galaxies. It holds the promise of a better handle on future large and deep distance surveys, for which individual distance errors would otherwise impede velocity field inference.
A 3D endoscopy reconstruction as a saliency map for analysis of polyp shapes
NASA Astrophysics Data System (ADS)
Ruano, Josue; Martínez, Fabio; Gómez, Martín.; Romero, Eduardo
2015-01-01
A first diagnosis of colorectal cancer is performed by examining polyp shape and appearance during a routine endoscopy procedure. However, video-endoscopy is highly noisy because exacerbated physiological conditions, such as increased motility or secretion, may limit the visual analysis of lesions. In this work a 3D reconstruction of the digestive tract is proposed, facilitating polyp shape evaluation by highlighting its surface geometry and allowing analysis from different perspectives. The method starts with a spatio-temporal map, constructed to group the different regions of the tract by their similar dynamic patterns during the sequence. This map is then convolved with a second derivative of a Gaussian kernel that emulates the camera distortion and highlights the polyp surface. The kernel position in each frame was initialized from expert manual delineation and propagated along the sequence. Results show reliable reconstructions, with a salient 3D polyp structure that can then be better observed.
Reconstructing 3-D Ship Motion for Synthetic Aperture Sonar Processing
NASA Astrophysics Data System (ADS)
Thomsen, D. R.; Chadwell, C. D.; Sandwell, D.
2004-12-01
We are investigating the feasibility of coherent ping-to-ping processing of multibeam sonar data for high-resolution mapping and change detection in the deep ocean. Theoretical calculations suggest that standard multibeam resolution can be improved from 100 m to ~10 m through coherent summation of pings, similar to synthetic aperture radar image formation. A requirement for coherent summation of pings is to correct the phase of the return echoes to an accuracy of ~3 cm at a sampling rate of ~10 Hz. In September 2003, we conducted a seagoing experiment aboard R/V Revelle to test these ideas. Three geodetic-quality GPS receivers were deployed to recover 3-D ship motion to an accuracy of ±3 cm at a 1 Hz sampling rate [Chadwell and Bock, GRL, 2001]. Additionally, inertial navigation data (INS) from fiber-optic gyroscopes and pendulum-type accelerometers were collected at a 10 Hz rate. Independent measurements of ship orientation (yaw, pitch, and roll) from the GPS and INS show agreement to an RMS accuracy of better than 0.1 degree. Because inertial navigation hardware is susceptible to drift, these measurements were combined with the GPS to achieve both high accuracy and high sampling rate. To preserve the short-timescale accuracy of the INS and the long-timescale accuracy of the GPS measurements, time-filtered differences between the GPS and INS were subtracted from the INS integrated linear velocities. An optimal filter length of 25 s was chosen to force the RMS difference between the GPS and the integrated INS to be on the order of the accuracy of the GPS measurements. This analysis provides an upper bound on 3-D ship motion accuracy. Additionally, errors in the attitude can translate to the projections of motion for individual hydrophones. With lever arms on the order of 5 m, these errors will likely be ~1 mm. Based on these analyses, we expect to achieve the 3-cm accuracy requirement. Using full-resolution hydrophone data collected by a SIMRAD EM/120 echo sounder
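The GPS/INS blending described above, subtracting time-filtered GPS-INS differences from the INS-integrated motion, behaves like a complementary filter: the INS supplies the high-frequency content and the GPS removes the low-frequency drift. A minimal one-axis sketch follows; the moving-average low-pass filter, the function names, and the signal model in the usage note are assumptions for illustration, not the experiment's processing chain.

```python
import numpy as np

def blend_gps_ins(ins_pos, gps_pos, fs_ins=10.0, filter_len_s=25.0):
    """Correct an INS-integrated position series with the low-frequency
    part of the GPS-INS difference, keeping INS short-term accuracy and
    GPS long-term accuracy.

    ins_pos : (N,) INS-integrated position at fs_ins Hz (drifts slowly)
    gps_pos : (N,) GPS position resampled to the INS time base (noisy)
    """
    n = max(1, int(filter_len_s * fs_ins))
    diff = gps_pos - ins_pos                        # INS drift plus GPS noise
    kernel = np.ones(n) / n                         # moving-average low-pass
    drift = np.convolve(diff, kernel, mode="same")  # smoothed drift estimate
    return ins_pos + drift                          # drift-corrected track
```

With a 25 s window at 10 Hz, high-frequency GPS noise is attenuated by roughly the square root of the window length while slow INS drift is removed almost entirely.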
Computer generated holograms of 3D objects with reduced number of projections
NASA Astrophysics Data System (ADS)
Huang, Su-juan; Liu, Dao-jin; Zhao, Jing-jing
2010-11-01
A new method for synthesizing computer-generated holograms of 3D objects with a reduced number of projections is proposed. According to the principle of the paraboloid of revolution in 3D Fourier space, the spectral information of a 3D object is gathered from projection images. We record a series of real projection images of 3D objects under incoherent white-light illumination using a circular scanning method, synthesize interpolated projection images by motion estimation and compensation between adjacent real projections, and then extract the spectral information of the 3D objects from all projection images along the circle. Because of quantization error, extraction along two circles performs better than along a single circle. Finally, the hologram is encoded based on computer-generated holography using a conjugate-symmetric extension. Our method significantly reduces the number of required real projections without greatly increasing the computing time of the hologram or degrading the reconstructed image. Numerical reconstruction of the hologram shows good results.
3D endobronchial ultrasound reconstruction and analysis for multimodal image-guided bronchoscopy
NASA Astrophysics Data System (ADS)
Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher R.; Toth, Jennifer W.; Higgins, William E.
2014-03-01
State-of-the-art image-guided intervention (IGI) systems for lung-cancer management draw upon high-resolution three-dimensional multi-detector computed-tomography (MDCT) images and bronchoscopic video. An MDCT scan provides a high-resolution three-dimensional (3D) image of the chest that is used for preoperative procedure planning, while bronchoscopy gives live intraoperative video of the endobronchial airway tree structure. However, because neither source provides live extraluminal information on suspect nodules or lymph nodes, endobronchial ultrasound (EBUS) is often introduced during a procedure. Unfortunately, existing IGI systems provide no direct synergistic linkage between the MDCT/video data and EBUS data. Hence, EBUS proves difficult to use and can lead to inaccurate interpretations. To address this drawback, we present a prototype of a multimodal IGI system that brings together the various image sources. The system enables 3D reconstruction and visualization of structures depicted in the 2D EBUS video stream. It also provides a set of graphical tools that link the EBUS data directly to the 3D MDCT and bronchoscopic video. Results using phantom and human data indicate that the new system could potentially enable smooth natural incorporation of EBUS into the system-level workflow of bronchoscopy.
An Image-Based Technique for 3D Building Reconstruction Using Multi-View UAV Images
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2015-12-01
Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects within complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
Lei Liu; Feng Zhou; Xue-Ru Bai; Ming-Liang Tao; Zi-Jing Zhang
2016-04-01
Traditionally, the factorization method is applied to reconstruct the 3D geometry of a target from its sequential inverse synthetic aperture radar images. However, this method requires performing cross-range scaling on all the sub-images and thus has a large computational burden. To tackle this problem, this paper proposes a novel method for joint cross-range scaling and 3D geometry reconstruction of steadily moving targets. In this method, we model the equivalent rotational angular velocity (RAV) by a linear polynomial in time, and set its coefficients randomly to perform sub-image cross-range scaling. Then, we generate the initial trajectory matrix of the scattering centers, and solve for the 3D geometry and projection vectors by the factorization method with relaxed constraints. After that, the coefficients of the polynomial are estimated from the projection vectors to obtain the RAV. Finally, the trajectory matrix is re-scaled using the estimated rotational angle, and accurate 3D geometry is reconstructed. The two major steps, i.e., the cross-range scaling and the factorization, are performed repeatedly to achieve precise 3D geometry reconstruction. Simulation results have proved the effectiveness and robustness of the proposed method. PMID:26886991
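The factorization step above is in the spirit of the classic Tomasi-Kanade rank-3 factorization of a trajectory matrix via SVD. A generic sketch under that assumption follows; it omits the paper's relaxed constraints and the iterative cross-range scaling loop.

```python
import numpy as np

def factorize_trajectories(W):
    """Rank-3 factorization of a registered trajectory matrix.

    W : (2F, P) matrix stacking the projections of P scattering
        centers over F images, with per-row centroids already removed.
    Returns a projection matrix M (2F, 3) and 3D shape S (3, P),
    defined up to an invertible 3x3 ambiguity.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the three dominant singular values (rank-3 approximation).
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S
```

For noise-free data the trajectory matrix is exactly rank 3, so the product of the two factors reproduces it; resolving the 3x3 ambiguity requires additional metric constraints, which the paper's method supplies through the RAV model.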
Image-Based 3D Reconstruction and Analysis for Orthodontia
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2012-08-01
Among the main tasks of orthodontia are the analysis of teeth arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measuring teeth parameters and designing the ideal arch curve which the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are fixed to the teeth, and a wire of given shape, clamped by these brackets, to apply the forces needed to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying a standard approach to the wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation, aimed at overcoming these disadvantages, is proposed. The proposed approach enables accurate measurement of the teeth parameters needed for adequate planning, design of the correct teeth positions, and monitoring of the treatment process. The developed technique applies photogrammetric means for teeth arch 3D model generation, bracket position determination and teeth shifting analysis.
3D surface reconstruction based on image stitching from gastric endoscopic video sequence
NASA Astrophysics Data System (ADS)
Duan, Mengyao; Xu, Rong; Ohya, Jun
2013-09-01
This paper proposes a method for reconstructing detailed 3D structures of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition each video sequence into groups, where each group consists of two successive frames (an image pair), and each pair contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SfM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds, by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as the seed points, and then obtain the dense point cloud from the sparse point cloud via a depth testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency in the 3D reconstruction of the gastric surface from an endoscopic video sequence.
Automatic Texture Reconstruction of 3D City Model from Oblique Images
NASA Astrophysics Data System (ADS)
Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang
2016-06-01
In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
Nguyen, Duc V; Vo, Quang N; Le, Lawrence H; Lou, Edmond H M
2015-02-01
Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine associated with vertebral rotation. The Cobb angle and axial vertebral rotation are important parameters to assess the severity of scoliosis. However, vertebral rotation is seldom measured from radiographs because the measurement is time-consuming. Different techniques have been developed to extract 3D spinal information; among them, ultrasound imaging is a promising method. This pilot study reports an image processing method to reconstruct the posterior surface of vertebrae from 3D ultrasound data. Three cadaver vertebrae, a Sawbones spine phantom, and a spine from a child with AIS were used to validate the development. The in-vitro results showed that the surface of the reconstructed image was visually similar to the original objects. The dimension measurement error was <5 mm and the Pearson correlation was >0.99. The results also showed high accuracy in vertebral rotation, with errors of 0.8 ± 0.3°, 2.8 ± 0.3° and 3.6 ± 0.5° for rotation values of 0°, 15° and 30°, respectively. Meanwhile, the difference between the phantom and the image was 4° in the Cobb angle and 2° in the vertebral rotation at the apex. The Cobb angle measured from the in-vivo ultrasound image was 4° different from the radiograph. PMID:25550193
Computations of Emissions Using a 3-D Combustor Program
NASA Technical Reports Server (NTRS)
Srivatsa, S. K.
1983-01-01
A general 3-D combustor performance program developed by Garrett was extended to predict soot and NOx emissions. The soot formation and oxidation rates were computed by quasi-global models, taking into account the influence of turbulence. Radiation heat transfer was computed by the six-flux radiation model. The radiation properties include the influence of CO2 and H2O in addition to soot. NOx emissions were computed from a global four-step hydrocarbon oxidation scheme and a set of rate-controlled reactions involving radicals and nitrogen oxides.
NASA Astrophysics Data System (ADS)
Yang, R.; Song, A.; Li, X. D.; Lu, Y.; Yan, R.; Xu, B.; Li, X.
2014-10-01
A 3D reconstruction solution for ultrasound Joule heat density tomography, based on the acousto-electric (AE) effect and deconvolution, is proposed for noninvasive imaging of biological tissue. Compared with ultrasound current source density imaging, ultrasound Joule heat density tomography does not require any a priori knowledge of the conductivity distribution or lead fields, so it can achieve better imaging results, is more adaptable to the environment, and has a wider application scope. For a general 3D volume conductor with a broadly distributed current density field, the ultrasound pressure cannot simply be separated from the 3D integration in the AE equation, so the measurement is not a simple modulation and the usual basebanding (heterodyning) method is no longer suitable for separating the Joule heat density from the AE signals. In the proposed method the measurement signal is viewed as the output of the Joule heat density convolved with the ultrasound wave. As a result, the internal 3D Joule heat density can be reconstructed by means of Wiener deconvolution. A series of computer simulations for breast cancer imaging applications, considering ultrasound beam diameter, noise level, conductivity contrast, position dependency and size of simulated tumors, was conducted to evaluate the feasibility and performance of the proposed reconstruction method. The simulation results demonstrate that high-spatial-resolution 3D ultrasound Joule heat density imaging is feasible with the proposed method, which has potential applications to breast cancer detection and imaging of other organs.
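The core reconstruction step above, Wiener deconvolution of the measured AE signal by the known ultrasound pulse, can be sketched in one dimension. This is an illustrative sketch, not the paper's 3D implementation; the `snr` parameter and the circular-convolution signal model are assumptions.

```python
import numpy as np

def wiener_deconv(measured, kernel, snr=100.0):
    """Recover a source distribution from its convolution with a known
    pulse via frequency-domain Wiener deconvolution.

    measured : 1D measured signal (source circularly convolved with kernel)
    kernel   : 1D pulse, same length as the measured signal
    snr      : assumed signal-to-noise power ratio
    """
    H = np.fft.fft(kernel)
    G = np.fft.fft(measured)
    # Wiener filter: H* / (|H|^2 + 1/SNR); the 1/SNR term regularizes
    # frequencies where the pulse spectrum is weak.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(W * G))
```

For a broadband pulse and high SNR the filter approaches plain inverse filtering, while for noisy data the regularization suppresses amplification of weak spectral components.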
NASA's 3D Flight Computer for Space Applications
NASA Technical Reports Server (NTRS)
Alkalai, Leon
2000-01-01
The New Millennium Program (NMP) Integrated Product Development Team (IPDT) for Microelectronics Systems was planning to validate a newly developed 3D flight computer system on its first deep-space flight, DS1, launched in October 1998. This computer, developed in the 1995-97 time frame, contains many new computer technologies never previously used in deep-space systems. They include: an advanced 3D packaging architecture for future low-mass and low-volume avionics systems; high-density 3D packaged chip-stacks for both volatile and non-volatile mass memory: 400 Mbytes of local DRAM memory and 128 Mbytes of Flash memory; a high-bandwidth Peripheral Component Interconnect (PCI) local bus with a bridge to VME; a high-bandwidth (20 Mbps) fiber-optic serial bus; and other attributes, such as standard support for Design for Testability (DFT). Even though this computer system was not completed in time for delivery to the DS1 project, it was an important development along a technology roadmap towards highly integrated and highly miniaturized avionics systems for deep-space applications. This technology development is now being continued by NASA's Deep Space System Development Program (also known as X2000) and within JPL's Center for Integrated Space Microsystems (CISM).
NASA Astrophysics Data System (ADS)
Wang, Wenli; Hawkins, William; Gagnon, Daniel
2004-06-01
A single photon emission computed tomography (SPECT) rotating slat collimator with a strip detector acquires distance-weighted plane integral data, along with the attenuation factor and distance-dependent detector response. In order to image a 3D object, the slat collimator device has first to spin around its axis and then rotate around the object to produce 3D projection measurements. Compared to the slice-by-slice 2D reconstruction used for the parallel-hole collimator and line integral data, a more complex 3D reconstruction is needed for the slat collimator and plane integral data. In this paper, we propose a 3D RBI-EM reconstruction algorithm with a spherically symmetric basis function, also called 'blobs', for the slat collimator. The blob has a closed, spherically symmetric analytical expression for the 3D Radon transform, which makes the plane integral easier to compute than with a voxel basis. It is completely localized in the spatial domain and nearly band-limited in the frequency domain. Its size and shape can be controlled by several parameters to achieve the desired reconstructed image quality. A mathematical lesion phantom study demonstrated that the blob reconstruction can achieve better contrast-noise trade-offs than the voxel reconstruction without greatly degrading image resolution. A real lesion phantom study further confirmed this and showed that a slat collimator with a CZT detector has better image quality than the conventional parallel-hole collimator with a NaI detector. The improvement might be due to both the slat collimation and the better energy resolution of the CZT detector.
Visualization of 3D elbow kinematics using reconstructed bony surfaces
NASA Astrophysics Data System (ADS)
Lalone, Emily A.; McDonald, Colin P.; Ferreira, Louis M.; Peters, Terry M.; King, Graham J. W.; Johnson, James A.
2010-02-01
An approach for direct visualization of continuous three-dimensional elbow kinematics using reconstructed surfaces has been developed. Simulation of valgus motion was achieved in five cadaveric specimens using an upper arm simulator. Direct visualization of the motion of the ulna and humerus at the ulnohumeral joint was obtained using a contact-based registration technique. Employing fiducial markers, the rendered humerus and ulna were positioned according to the simulated motion. The specific aim of this study was to investigate the effect of radial head arthroplasty on restoring elbow joint stability after radial head excision. The position of the ulna and humerus was visualized for the intact elbow and following radial head excision and replacement. Visualization of the registered humerus and ulna indicated an increase in valgus angulation of the ulna with respect to the humerus after radial head excision. This increase in valgus angulation was restored to that of an elbow with a native radial head following radial head arthroplasty. These findings were consistent with previous studies investigating elbow joint stability following radial head excision and arthroplasty. The current technique was able to visualize a change in ulnar position in a single degree of freedom. Using this approach, the coupled motion of the ulna in all six degrees of freedom can also be visualized.
Advanced computational tools for 3-D seismic analysis
Barhen, J.; Glover, C.W.; Protopopescu, V.A.
1996-06-01
The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the innovative computational techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.
GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.
Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H
2012-09-01
Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
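A DRR, as generated above, is a set of line integrals of X-ray attenuation through the CT volume. A minimal CPU-side, parallel-beam sketch follows; real 2-D/3-D registration systems, including the GPU approach in this paper, use perspective ray casting, so this is illustrative only and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def drr_parallel(ct_volume, angle_deg=0.0):
    """Generate a parallel-beam digitally reconstructed radiograph by
    rotating the CT volume about its z-axis and summing attenuation
    along the ray (x) direction.

    ct_volume : (z, y, x) array of attenuation values
    Returns a (z, y) projection image.
    """
    rotated = rotate(ct_volume, angle_deg, axes=(1, 2),
                     reshape=False, order=1)
    return rotated.sum(axis=2)   # line integral along each row of voxels
```

The per-ray summation is what makes DRR generation so amenable to GPU parallelization: every output pixel is an independent accumulation over voxels.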
3D Reconstruction of a Rotating Erupting Prominence
NASA Technical Reports Server (NTRS)
Thompson, W. T.; Kliem, B.; Torok, T.
2011-01-01
A bright prominence associated with a coronal mass ejection (CME) was seen erupting from the Sun on 9 April 2008. This prominence was tracked by both the Solar Terrestrial Relations Observatory (STEREO) EUVI and COR1 telescopes, and was seen to rotate about the line of sight as it erupted; therefore, the event has been nicknamed the "Cartwheel CME." The threads of the prominence in the core of the CME quite clearly indicate the structure of a weakly to moderately twisted flux rope throughout the field of view, up to heliocentric heights of 4 solar radii. Although the STEREO separation was 48 deg, it was possible to match some sharp features in the later part of the eruption as seen in the 304 Å line in EUVI and in the Hα-sensitive bandpass of COR1 by both STEREO Ahead and Behind. These features could then be traced out in three-dimensional space, and reprojected into a view in which the eruption is directed towards the observer. The reconstructed view shows that the alignment of the prominence to the vertical axis rotates as it rises up to a leading-edge height of approximately 2.5 solar radii, and then remains approximately constant. The alignment at 2.5 solar radii differs by about 115 deg from the original filament orientation inferred from Hα and EUV data, and the height profile of the rotation, obtained here for the first time, shows that two thirds of the total rotation are reached within approximately 0.5 solar radii above the photosphere. These features are well reproduced by numerical simulations of an unstable moderately twisted flux rope embedded in external flux with a relatively strong shear field component.
Majority logic gate for 3D magnetic computing.
Eichwald, Irina; Breitkreutz, Stephan; Ziemys, Grazvydas; Csaba, György; Porod, Wolfgang; Becherer, Markus
2014-08-22
For decades now, microelectronic circuits have been exclusively built from transistors. An alternative way is to use nano-scaled magnets for the realization of digital circuits. This technology, known as nanomagnetic logic (NML), may offer significant improvements in terms of power consumption and integration densities. Further advantages of NML are: non-volatility, radiation hardness, and operation at room temperature. Recent research focuses on the three-dimensional (3D) integration of nanomagnets. Here we show, for the first time, a 3D programmable magnetic logic gate. Its computing operation is based on physically field-interacting nanometer-scaled magnets arranged in a 3D manner. The magnets possess a bistable magnetization state representing the Boolean logic states '0' and '1.' Magneto-optical and magnetic force microscopy measurements prove the correct operation of the gate over many computing cycles. Furthermore, micromagnetic simulations confirm the correct functionality of the gate even for a size in the nanometer-domain. The presented device demonstrates the potential of NML for three-dimensional digital computing, enabling the highest integration densities. PMID:25073985
A simple approach for 3D reconstruction of the spine from biplanar radiography
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Shi, Xinling; Lv, Liang; Guo, Fei; Zhang, Yufeng
2014-04-01
This paper proposes a simple approach for 3D spinal reconstruction from biplanar radiography. The proposed reconstruction consists of reconstructing the 3D central curve of the spine based on the epipolar geometry and automatically aligning vertebrae under the constraint of this curve. The vertebral orientations are adjusted by matching the projections of the 3D pedicles with the 2D pedicles in the biplanar radiographs. The user interaction time was within one minute for a thoracic spine. Sixteen pairs of radiographs of a thoracic spinal model were used to evaluate the precision and accuracy. The precision was within 3.1 mm for the location and 3.5° for the orientation. The accuracy was within 3.5 mm for the location and 3.9° for the orientation. These results demonstrate that this approach can be a promising tool to obtain the 3D spinal geometry with acceptable user interaction in scoliotic clinics.
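Reconstructing a 3D landmark from its projections in two calibrated radiographs, as in the epipolar-geometry step above, is a linear triangulation problem. A generic direct linear transform (DLT) sketch follows; it is not the paper's specific method, and the projection-matrix setup in the usage note is assumed.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2   : (3, 4) projection matrices of the two radiographs
    uv1, uv2 : (u, v) image coordinates of the same landmark in each view
    Returns the 3D point (x, y, z).
    """
    # Each view contributes two linear equations in the homogeneous point.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null vector of A (least-squares solution)
    return X[:3] / X[3]          # de-homogenize
```

With exact correspondences the null vector of A recovers the point exactly; with noisy pedicle detections the SVD gives the algebraic least-squares solution.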
3D reconstruction of a human heart fascicle using SurfDriver
NASA Astrophysics Data System (ADS)
Rader, Robert J.; Phillips, Steven J.; LaFollette, Paul S., Jr.
2000-06-01
The Temple University Medical School has a sequence of over 400 serial sections of normal adult ventricular human heart tissue, cut at 25 micrometer thickness. We used a Zeiss Ultraphot with a 4x planapo objective and a Pixera digital camera to make a series of 45 sequential montages for use in the 3D reconstruction of a fascicle (muscle bundle). We wrote custom software to merge 4 smaller image fields from each section into one composite image. We used SurfDriver software, developed by Scott Lozanoff of the University of Hawaii and David Moody of the University of Alberta, for registration, object boundary identification, and 3D surface reconstruction. We used an Epson Stylus Color 900 printer to obtain photo-quality prints. We describe the challenges and our solutions to the following problems: image acquisition and digitization, image merging, alignment and registration, boundary identification, 3D surface reconstruction, 3D visualization and orientation, snapshots, and photo-quality prints.
A fast 3D reconstruction system with a low-cost camera accessory
Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.
2015-01-01
Photometric stereo is a three-dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D scanning, it offers a number of advantages, such as a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB-programmable controller board to sequentially control the illumination. 3D images are derived for different objects of varying geometric complexity, and results are presented showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407
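The Lambertian least-squares step at the heart of photometric stereo can be sketched as follows. This is an illustrative implementation assuming known, calibrated light directions; the synthetic test plane, light vectors, and function names are our own, not the authors' accessory software.

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (k, h, w) intensity images; lights: (k, 3) unit light directions.
    Returns per-pixel unit normals (3, h, w) and albedo (h, w)."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    # Lambertian model: I = lights @ (albedo * normal); solve all pixels at once.
    n_tilde, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(n_tilde, axis=0)
    normals = n_tilde / np.maximum(albedo, 1e-12)        # unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic test: a Lambertian plane facing straight up (+z), albedo 0.8,
# imaged under four illustrative LED directions.
lights = np.array([[0, 0, 1.0], [0.5, 0, 0.866], [0, 0.5, 0.866], [-0.5, 0, 0.866]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = 0.8 * (lights @ true_n).reshape(-1, 1, 1) * np.ones((4, 8, 8))
normals, albedo = photometric_stereo(imgs, lights)
```

At least three non-coplanar light directions are needed; four, as in the paper's accessory, make the per-pixel system overdetermined and more robust.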
Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.
2006-01-01
The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of each curve in 2D space are found, and from these the 3D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The described reconstruction methodology is evaluated through simulation studies. It is applicable to LBW (leg before wicket) decisions in cricket, missile path estimation, robotic vision, path planning, etc.
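The 2D least-squares curve-fitting step can be illustrated with the standard algebraic conic fit (smallest singular vector of the design matrix). This sketch is our own illustration of that generic technique, not the authors' formulation, and the circle test data are invented.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0.
    Minimizes the algebraic error subject to ||[a..f]|| = 1, which is the
    smallest right singular vector of the design matrix."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

# Points on the circle x^2 + y^2 = 4 (a circle is a quadratic curve).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
p = fit_conic(2 * np.cos(t), 2 * np.sin(t))
p = p / p[0]   # normalize so the x^2 coefficient is 1
# Expect approximately [1, 0, 1, 0, 0, -4].
```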
2-D and 3-D computations of curved accelerator magnets
Turner, L.R.
1991-01-01
In order to save computer memory, a long accelerator magnet may be computed by treating the long central region and the end regions separately. The dipole magnets for the injector synchrotron of the Advanced Photon Source (APS), now under construction at Argonne National Laboratory (ANL), employ magnet iron consisting of parallel laminations, stacked with a uniform radius of curvature of 33.379 m. Laplace's equation for the magnetic scalar potential has a different form for a straight magnet (x-y coordinates), a magnet with surfaces curved about a common center (r-θ coordinates), and a magnet with parallel laminations like the APS injector dipole. Yet pseudo 2-D computations for the three geometries give basically identical results, even for a much more strongly curved magnet. Hence 2-D (x-y) computations of the central region and 3-D computations of the end regions can be combined to determine the overall magnetic behavior of the magnets. 1 ref., 6 figs.
3D reconstruction for sinusoidal motion based on different feature detection algorithms
NASA Astrophysics Data System (ADS)
Zhang, Peng; Zhang, Jin; Deng, Huaxia; Yu, Liandong
2015-02-01
The dynamic testing of structures and components is an important area of research, and methods that use contact sensors to measure vibration parameters have been studied extensively for years. With the rapid development of industrial high-speed cameras and computer hardware, stereo vision has become a focus of dynamic-testing research owing to its advantages of non-contact, full-field measurement, high resolution and high accuracy. However, little research has addressed dynamic testing based on stereo vision, and few publications deal with the three-dimensional (3D) reconstruction of feature points under dynamic conditions, even though obtaining the accurate motion of target objects is essential for subsequent analysis. In this paper, an object undergoing sinusoidal motion is measured by stereo vision, and the accuracy achieved with different feature detection algorithms is investigated. Three different markers (a dot, a square and a circle) are attached to the object, which is driven in sinusoidal motion by a vibration table. The speeded-up robust features (SURF) algorithm is used to detect the dot, Harris corner detection locates the square corners, and the Hough transform positions the circle center. After the pixel coordinates of each feature point are obtained, the stereo calibration parameters are used to achieve 3D reconstruction via the triangulation principle. Trajectories along a specified direction are obtained according to the vibration frequency and the camera acquisition rate. Finally, the reconstruction accuracies of the different feature detection algorithms are compared.
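The triangulation step, recovering a 3D point from its pixel coordinates in two calibrated views, can be sketched with the standard linear (DLT) method. The projection matrices and test point below are synthetic illustrations; this is not the authors' code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its image
    coordinates x1, x2 in two views with 3x4 projection matrices P1, P2.
    Each observation contributes two rows of a homogeneous system A X = 0."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two synthetic normalized cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

Applied per frame to the detected marker coordinates, this yields the 3D trajectory of each feature point.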
X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers
NASA Astrophysics Data System (ADS)
Willey, T. M.; Champley, K.; Hodgin, R.; Lauderbach, L.; Bagge-Hansen, M.; May, C.; Sanchez, N.; Jensen, B. J.; Iverson, A.; van Buuren, T.
2016-06-01
Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. This work outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ~80 ps pulses spaced 153.4 ns apart. A four-camera system acquired four images from successive x-ray pulses on each shot. The first frame was acquired prior to bridge burst; the second images the flyer about 0.16 mm above the surface, where edges of the foil and/or flyer are still attached to the substrate. The third frame captures the flyer in flight, while the fourth shows a completely detached flyer at a position typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a three-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.
Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction
NASA Astrophysics Data System (ADS)
Yu, Qian; Helmholz, Petra; Belton, David
2016-06-01
In recent years, 3D city models have been in high demand by many public and private organisations, and the steady growth in both their quality and quantity is increasing that demand further. The quality evaluation of these 3D models is a relevant issue from both the scientific and the practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation is performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity from the evaluation point of view is also assessed.
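Completeness and correctness as generic evaluation measures can be sketched as set-based recall/precision computations over matched model elements. This is a minimal illustration of the two concepts only, not the paper's six specific measures; the voxel-cell representation is an assumption made for the example.

```python
# Reconstruction and reference are represented as sets of cell identifiers
# (e.g. occupied voxels), so overlap is plain set intersection.

def completeness(reconstructed, reference):
    """Fraction of the reference that the reconstruction recovers (recall)."""
    return len(reconstructed & reference) / len(reference)

def correctness(reconstructed, reference):
    """Fraction of the reconstruction that exists in the reference (precision)."""
    return len(reconstructed & reference) / len(reconstructed)

ref = {(i, j, 0) for i in range(10) for j in range(10)}   # 100 reference cells
rec = {(i, j, 0) for i in range(8) for j in range(10)}    # misses 20 cells
rec |= {(20, 20, 0), (21, 20, 0)}                         # 2 spurious cells
# completeness(rec, ref) -> 0.8 ; correctness(rec, ref) -> 80/82
```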
3D reconstruction of a building from LIDAR data with first-and-last echo information
NASA Astrophysics Data System (ADS)
Zhang, Guoning; Zhang, Jixian; Yu, Jie; Yang, Haiquan; Tan, Ming
2007-11-01
With the development of aerial LIDAR technology and the widespread application of LIDAR data in city modeling, urban planning, etc., automatic recognition and reconstruction of buildings from LIDAR datasets has become an important research topic. Applying the information in the first-and-last echo data of the same laser point, this paper presents a scheme for 3D reconstruction of simple buildings that mainly includes the following steps: recognition of non-boundary and boundary building points and generation of each building point cluster; localization of the boundary of each building; detection of the planes contained in each cluster; and reconstruction of the building in 3D form. Experiments show that, for LIDAR data with first-and-last echo information, the scheme can effectively and efficiently reconstruct simple buildings, such as flat-roofed and gabled buildings, in 3D.
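The plane-detection step in such a pipeline is commonly done with RANSAC. The following is a hedged sketch of that generic technique applied to a cluster of roof points; the threshold, iteration count, and synthetic flat roof are illustrative assumptions, and the paper's actual detection method may differ.

```python
import numpy as np

def ransac_plane(pts, thresh=0.1, iters=200, rng=None):
    """Find the dominant plane in an (n, 3) point cloud by RANSAC.
    Returns a boolean inlier mask."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)          # candidate plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-9:                         # degenerate (collinear) sample
            continue
        n = n / norm
        d = np.abs((pts - p0) @ n)              # point-to-plane distances
        inliers = d < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic cluster: flat roof at z = 3 with small noise, plus scattered clutter.
rng = np.random.default_rng(1)
roof = np.column_stack([rng.uniform(0, 10, 300), rng.uniform(0, 10, 300),
                        3 + rng.normal(0, 0.02, 300)])
clutter = rng.uniform(0, 10, (30, 3))
mask = ransac_plane(np.vstack([roof, clutter]))
```

For a gabled roof, running the detector twice (removing the first plane's inliers before the second pass) would recover both roof faces.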
3D shape reconstruction of medical images using a perspective shape-from-shading method
NASA Astrophysics Data System (ADS)
Yang, Lei; Han, Jiu-qiang
2008-06-01
A 3D shape reconstruction approach for medical images using a shape-from-shading (SFS) method was proposed in this paper. A new reflectance map equation for medical images was derived under the assumptions that a Lambertian surface was irradiated by a point light source located at the optical center and that the image was formed under perspective projection. The corresponding static Hamilton-Jacobi (H-J) equation of the reflectance map equation was established, so the shape-from-shading problem reduced to computing the viscosity solution of the static H-J equation. Then, using the vanishing-viscosity approximation, the Lax-Friedrichs fast sweeping numerical method was used to compute the viscosity solution of the H-J equation, and a new iterative SFS algorithm was obtained. Finally, experiments on both synthetic images and real medical images were performed to illustrate the efficiency of the proposed SFS method.
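A close relative of the sweeping scheme used for such H-J equations is Godunov-type fast sweeping for the simpler eikonal equation |∇u| = 1: Gauss-Seidel passes in alternating grid orderings propagate information in all characteristic directions. The minimal solver below illustrates that sweeping idea only; it is our own sketch, not the paper's perspective-SFS algorithm.

```python
import numpy as np

def fast_sweep_eikonal(mask, h=1.0, sweeps=4):
    """Solve |grad u| = 1 with u = 0 on 'mask' (boundary set) by Gauss-Seidel
    sweeps in the four axis orderings, using the Godunov upwind update."""
    n, m = mask.shape
    u = np.where(mask, 0.0, 1e10)
    orders = [(range(n), range(m)), (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if mask[i, j]:
                        continue
                    a = min(u[i - 1, j] if i > 0 else 1e10,
                            u[i + 1, j] if i < n - 1 else 1e10)
                    b = min(u[i, j - 1] if j > 0 else 1e10,
                            u[i, j + 1] if j < m - 1 else 1e10)
                    if abs(a - b) >= h:            # one-sided (upwind) update
                        cand = min(a, b) + h
                    else:                          # two-sided quadratic update
                        cand = (a + b + np.sqrt(2 * h * h - (a - b) ** 2)) / 2
                    u[i, j] = min(u[i, j], cand)
    return u

# Distance-like solution from a single source at the center of a 21x21 grid.
mask = np.zeros((21, 21), dtype=bool)
mask[10, 10] = True
u = fast_sweep_eikonal(mask)
```

Along grid axes the computed value equals the exact distance; diagonally it slightly overestimates, as expected for this first-order monotone scheme.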
Katsumura, Seiko; Sato, Keita; Ikawa, Tomoko; Yamamura, Keiko; Ando, Eriko; Shigeta, Yuko; Ogawa, Takumi
2016-01-01
Computed tomography (CT) scanning has recently been introduced into forensic medicine and dentistry. However, the presence of metal restorations in the dentition can adversely affect the quality of three-dimensional reconstruction from CT scans. In this study, we aimed to evaluate the reproducibility of a "high-precision, reconstructed 3D model" obtained from a conebeam CT scan of dentition, a method that might be particularly helpful in forensic medicine. We took conebeam CT and helical CT images of three dry skulls marked with 47 measuring points; reconstructed three-dimensional images; and measured the distances between the points in the 3D images with a computer-aided design/computer-aided manufacturing (CAD/CAM) marker. We found that in comparison with the helical CT, conebeam CT is capable of reproducing measurements closer to those obtained from the actual samples. In conclusion, our study indicated that the image-reproduction from a conebeam CT scan was more accurate than that from a helical CT scan. Furthermore, the "high-precision reconstructed 3D model" facilitates reliable visualization of full-sized oral and maxillofacial regions in both helical and conebeam CT scans. PMID:26832374
NASA Astrophysics Data System (ADS)
Bourrion, O.; Bosson, G.; Grignon, C.; Bouly, J. L.; Richer, J. P.; Guillaudin, O.; Mayet, F.; Billard, J.; Santos, D.
2011-11-01
Directional detection of non-baryonic Dark Matter requires 3D reconstruction of low-energy nuclear recoil tracks. A gaseous micro-TPC matrix, filled with either 3He, CF4 or C4H10, has been developed within the MIMAC project. Dedicated acquisition electronics and real-time track reconstruction software have been developed to monitor a 512-channel prototype. The self-triggered electronics uses embedded processing to reduce the data transfer to its useful part only, i.e. the decoded coordinates of hit tracks and the corresponding energy measurements. An acquisition software package with on-line monitoring and 3D track reconstruction is also presented.
The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors
NASA Astrophysics Data System (ADS)
Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.
2015-12-01
Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been carried out to identify suitable software and algorithms for achieving an accurate and complete model, but little attention has been paid to the type of sensor used and its effects on the quality of the final model. The purpose of this paper is to deliberate on and introduce an appropriate combination of sensor and software that provides a complete model with the highest accuracy. To do this, different software packages used in previous studies were compared and
3-D Monte Carlo-Based Scatter Compensation in Quantitative I-131 SPECT Reconstruction
Dewaraja, Yuni K.; Ljungberg, Michael; Fessler, Jeffrey A.
2010-01-01
We have implemented highly accurate Monte Carlo-based scatter modeling (MCS) with 3-D ordered subsets expectation maximization (OSEM) reconstruction for I-131 single photon emission computed tomography (SPECT). The scatter is included in the statistical model as an additive term, and attenuation and detector response are included in the forward/backprojector. In the present implementation of MCS, a simple multiple-window-based estimate is used for the initial iterations, and in later iterations the Monte Carlo estimate is used for several iterations before it is updated. For I-131, MCS was evaluated and compared with triple energy window (TEW) scatter compensation using simulation studies of a mathematical phantom and a clinically realistic voxel phantom. Even after just two Monte Carlo updates, excellent agreement was found between the MCS estimate and the true scatter distribution. Accuracy and noise of the reconstructed images were superior with MCS compared to TEW. However, the improvement was not large, and in some cases may not justify the large computational requirements of MCS. Furthermore, it was shown that the TEW correction could be improved for most of the targets investigated here by applying a suitably chosen scaling factor to the scatter estimate. Finally, clinical application of MCS was demonstrated by applying the method to an I-131 radioimmunotherapy (RIT) patient study. PMID:20104252
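The way an additive scatter term enters the statistical model can be illustrated with a toy MLEM update (OSEM with a single subset): the scatter estimate s is added to the forward projection inside the data-to-model ratio, never subtracted from the data. This sketch uses an invented 4x3 system matrix and noise-free data; it is not the authors' SPECT implementation.

```python
import numpy as np

def em_update(x, A, y, s):
    """One MLEM iteration for y ~ Poisson(A @ x + s), scatter s additive."""
    ybar = A @ x + s                                  # forward projection + scatter
    ratio = y / np.maximum(ybar, 1e-12)
    return x * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), 1e-12)

# Tiny toy system: 4 detector bins, 3 voxels, known uniform scatter.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (4, 3))
x_true = np.array([2.0, 5.0, 1.0])
s = np.full(4, 0.5)
y = A @ x_true + s                                    # noise-free data for illustration
x = np.ones(3)
for _ in range(2000):
    x = em_update(x, A, y, s)
# x converges toward x_true on this consistent, well-conditioned toy problem.
```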
Ahmad, Rizwan; Deng, Yuanmu; Vikram, Deepti S.; Clymer, Bradley; Srinivasan, Parthasarathy; Zweier, Jay L.; Kuppusamy, Periannan
2007-01-01
In continuous wave (CW) electron paramagnetic resonance imaging (EPRI), high quality of reconstructed image along with fast and reliable data acquisition is highly desirable for many biological applications. An accurate representation of uniform distribution of projection data is necessary to ensure high reconstruction quality. The current techniques for data acquisition suffer from nonuniformities or local anisotropies in the distribution of projection data and present a poor approximation of a true uniform and isotropic distribution. In this work, we have implemented a technique based on Quasi-Monte Carlo method to acquire projections with more uniform and isotropic distribution of data over a 3D acquisition space. The proposed technique exhibits improvements in the reconstruction quality in terms of both mean-square-error and visual judgment. The effectiveness of the suggested technique is demonstrated using computer simulations and 3D EPRI experiments. The technique is robust and exhibits consistent performance for different object configurations and orientations. PMID:17095271
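The idea behind quasi-Monte Carlo sampling of acquisition directions can be illustrated by mapping a 2D low-discrepancy (Halton) sequence onto the unit sphere with an equal-area transform, which yields a more uniform, isotropic set of directions than pseudo-random sampling. This generic sketch is our own illustration, not the authors' EPRI acquisition scheme.

```python
import numpy as np

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def quasi_uniform_directions(n):
    """Map the (base-2, base-3) Halton sequence to n points on the unit
    sphere via the equal-area (cylindrical) transform."""
    pts = np.empty((n, 3))
    for k in range(1, n + 1):
        u, v = halton(k, 2), halton(k, 3)
        z = 2 * u - 1                      # uniform in [-1, 1] preserves area
        phi = 2 * np.pi * v
        rho = np.sqrt(1 - z * z)
        pts[k - 1] = (rho * np.cos(phi), rho * np.sin(phi), z)
    return pts

dirs = quasi_uniform_directions(500)
# Every direction is unit length, and the set is nearly balanced: the mean
# vector is close to the origin.
```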