3D medical volume reconstruction using web services.
Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter
2008-04-01
We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that 3D medical volume reconstruction requires significant computer resources and human expertise in both medicine and computer science. The web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time the UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by the medical collaborators in a new window. In this paper, we present the requirements of the 3D volume reconstruction problem and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscopy. PMID:18336808
Incremental volume reconstruction and rendering for 3-D ultrasound imaging
NASA Astrophysics Data System (ADS)
Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry
1992-09-01
In this paper, we present approaches toward interactive visualization of real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
3D Surface Reconstruction and Volume Calculation of Rills
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.
2015-04-01
We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18 meter long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a still camera, recording with a video camera not only saves a great deal of time; it also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames. The sharpness was estimated using a derivative-based metric. VisualSfM then detects feature points in each image, matches feature points across all image pairs, recovers the camera positions and, finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low contrast of the surface, too much motion blur), the sharpness algorithm yields many more matching features. Hence the point densities of the 3D models are increased, which improves the difference calculations.
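The frame-selection step described above (keep the sharpest frame from every 15-frame interval, scored by a derivative-based metric) can be sketched as follows. The abstract does not specify the metric, so a mean squared gradient magnitude is assumed here:

```python
import numpy as np

def sharpness(frame):
    """Derivative-based sharpness score: mean squared gradient magnitude.
    (Assumed metric; the paper only says 'derivative-based'.)"""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2))

def select_sharpest(frames, interval=15):
    """From each run of `interval` consecutive frames, keep the sharpest."""
    selected = []
    for start in range(0, len(frames), interval):
        chunk = frames[start:start + interval]
        selected.append(max(chunk, key=sharpness))
    return selected

# A frame with strong intensity variation scores higher than a flat frame.
sharp = np.random.default_rng(0).normal(size=(32, 32))
flat = np.zeros((32, 32))
picked = select_sharpest([flat, sharp, flat], interval=3)
assert picked[0] is sharp
```

Blur suppresses high-frequency content, so any derivative-energy score of this kind ranks motion-blurred frames below sharp ones.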
A new method to combine 3D reconstruction volumes for multiple parallel circular cone beam orbits
Baek, Jongduk; Pelc, Norbert J.
2010-01-01
Purpose: This article presents a new reconstruction method for 3D imaging using a multiple 360° circular orbit cone beam CT system, specifically a way to combine 3D volumes reconstructed with each orbit. The main goal is to improve the noise performance in the combined image while avoiding cone beam artifacts. Methods: The cone beam projection data of each orbit are reconstructed using the FDK algorithm. When at least a portion of the total volume can be reconstructed by more than one source, the proposed combination method combines these overlap regions using weighted averaging in frequency space. The local exactness and the noise performance of the combination method were tested with computer simulations of a Defrise phantom, a FORBILD head phantom, and uniform noise in the raw data. Results: A noiseless simulation showed that the local exactness of the reconstructed volume from the source with the smallest tilt angle was preserved in the combined image. A noise simulation demonstrated that the combination method improved the noise performance compared to a single orbit reconstruction. Conclusions: In CT systems which have overlap volumes that can be reconstructed with data from more than one orbit and in which the spatial frequency content of each reconstruction can be calculated, the proposed method offers improved noise performance while keeping the local exactness of data from the source with the smallest tilt angle. PMID:21089770
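The combination step can be illustrated with a minimal sketch: two reconstructions of the same overlap region are averaged in frequency space with per-frequency weights. The actual weighting function (derived from each orbit's frequency content) is not given in the abstract, so uniform 0.5/0.5 weights are assumed for this demonstration:

```python
import numpy as np

def combine_freq(vol_a, vol_b, w_a, w_b):
    """Weighted average of two reconstructions of the same region in
    frequency space. w_a and w_b are per-frequency weights (same shape
    as the FFT of the volumes) that sum to 1 at each frequency."""
    fa = np.fft.fftn(vol_a)
    fb = np.fft.fftn(vol_b)
    return np.fft.ifftn(w_a * fa + w_b * fb).real

rng = np.random.default_rng(1)
truth = rng.normal(size=(16, 16, 16))
# Two noisy copies of the overlap region; with independent noise,
# equal-weight averaging halves the noise variance.
a = truth + 0.1 * rng.normal(size=truth.shape)
b = truth + 0.1 * rng.normal(size=truth.shape)
w = np.full(truth.shape, 0.5)
fused = combine_freq(a, b, w, w)
err_single = np.mean((a - truth) ** 2)
err_fused = np.mean((fused - truth) ** 2)
assert err_fused < err_single
```

With non-uniform weights the same code lets each orbit dominate the frequencies where its reconstruction is most reliable, which is the point of combining in frequency space rather than averaging voxels directly.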
Breast mass detection using slice conspicuity in 3D reconstructed digital breast volumes
NASA Astrophysics Data System (ADS)
Kim, Seong Tae; Kim, Dae Hoe; Ro, Yong Man
2014-09-01
In digital breast tomosynthesis, the three-dimensional (3D) reconstructed volumes only provide quasi-3D structure information, with limited resolution along the depth direction due to insufficient sampling in that direction and the limited angular range. This limitation can seriously hamper conventional 3D image analysis techniques for detecting masses, because the limited number of projection views causes blurring in the out-of-focus planes. In this paper, we propose a novel mass detection approach using slice conspicuity in the 3D reconstructed digital breast volumes to overcome this limitation. First, to overcome the limited resolution along the depth direction, we detect regions of interest (ROIs) on each reconstructed slice and separately utilize the depth-directional information to combine the ROIs effectively. Furthermore, we measure the blurriness of each slice to mitigate the performance degradation caused by blur in the out-of-focus planes. Finally, mass features are extracted from the selected in-focus slices and analyzed by a support vector machine classifier to reduce the false positives. Comparative experiments have been conducted on a clinical data set. Experimental results demonstrate that the proposed approach outperforms the conventional 3D approach by achieving high sensitivity with a small number of false positives.
NASA Astrophysics Data System (ADS)
Khongsomboon, Khamphong; Hamamoto, Kazuhiko; Kondo, Shozo
3D reconstruction from ordinary X-ray equipment, rather than CT or MRI, is required in clinical veterinary medicine. The authors have already proposed a 3D reconstruction technique from X-ray photographs to present bone structure. Although the reconstruction is useful for veterinary medicine, the technique has two problems: one concerns X-ray exposure and the other the data acquisition process. An ordinary type of X-ray equipment that can solve both problems is X-ray fluoroscopy. Therefore, in this paper, we propose a method for 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine. Fluoroscopy is usually used to observe the movement of an organ, or to identify its position for surgery, using weak X-ray intensity. Since fluoroscopy can output the observed result as a movie, the two problems caused by the use of X-ray photographs can be solved. However, a new problem arises due to the weak X-ray intensity. Although fluoroscopy can present information on not only bone structure but also soft tissues, the contrast is very low and it is very difficult to recognize some soft tissues. Being able to observe not only bone structure but also soft tissues clearly with ordinary X-ray equipment would be very useful in the field of clinical veterinary medicine. To solve this problem, this paper proposes a new method to determine opacity in the volume rendering process. The opacity is determined according to the 3D differential coefficient of the 3D reconstruction. This differential volume rendering can present a 3D structure image of multiple organs volumetrically and clearly for clinical veterinary medicine. This paper shows results of simulation and experimental investigation of a small dog, and evaluation by veterinarians.
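The proposed opacity rule ("determined according to the 3D differential coefficient of the 3D reconstruction") suggests a gradient-magnitude-driven opacity transfer function. A minimal sketch under that interpretation; the exact mapping from gradient magnitude to opacity is an assumption:

```python
import numpy as np

def gradient_opacity(volume, scale=1.0):
    """Opacity transfer function driven by the 3D gradient magnitude:
    tissue boundaries (e.g. bone/soft-tissue interfaces) become opaque
    while homogeneous interiors stay transparent. The exponential
    mapping is an assumed choice, not the paper's exact formula."""
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return 1.0 - np.exp(-scale * mag)   # maps [0, inf) -> [0, 1)

# A step edge inside a volume gets high opacity; uniform regions get zero.
vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 100.0
alpha = gradient_opacity(vol, scale=0.1)
assert alpha[4, 4, 0] == 0.0   # uniform region: fully transparent
assert alpha[4, 4, 4] > 0.9    # boundary: nearly opaque
```

Because the opacity depends on the local derivative rather than the raw intensity, low-contrast soft-tissue boundaries can still be rendered visibly, which matches the motivation stated in the abstract.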
3D volume reconstruction of a mouse brain from histological sections using warp filtering
Ju, Tao; Warren, Joe; Carson, James P.; Bello, Musodiq; Kakadiaris, Ioannis; Chiu, Wah; Thaller, Christina; Eichele, Gregor
2006-09-30
Sectioning tissues for optical microscopy often introduces distortions into the resulting sections that make 3D reconstruction difficult. Here we present an automatic method for producing a smooth 3D volume from distorted 2D sections in the absence of any undistorted reference. The method is based on pairwise elastic image warps between successive tissue sections, which can be computed by 2D image registration. Using a Gaussian filter, an average warp is computed for each section from the pairwise warps within a group of its neighboring sections. The average warps deform each section to match its neighboring sections, thus creating a smooth volume in which corresponding features on successive sections lie close to each other. The proposed method can be used with any existing 2D image registration method for 3D reconstruction. In particular, we present a novel image warping algorithm based on dynamic programming that extends Dynamic Time Warping from 1D speech recognition to compute pairwise warps between high-resolution 2D images. The warping algorithm efficiently computes a restricted class of 2D local deformations that are characteristic between successive tissue sections. Finally, a validation framework is proposed and applied to evaluate the quality of reconstruction using both real sections and a synthetic volume.
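The averaging step (a Gaussian filter over the pairwise warps of neighbouring sections) might be sketched like this; the window radius, sigma, and the (2, H, W) displacement-field layout are illustrative assumptions:

```python
import numpy as np

def average_warp(pairwise_warps, i, sigma=1.0, radius=2):
    """Smooth warp for section i: Gaussian-weighted (in section index)
    average of the pairwise displacement fields from section i to each
    neighbouring section j. pairwise_warps[j] has shape (2, H, W); the
    entry for j == i is the identity (zero displacement)."""
    lo, hi = max(0, i - radius), min(len(pairwise_warps), i + radius + 1)
    weights = np.array([np.exp(-0.5 * ((j - i) / sigma) ** 2)
                        for j in range(lo, hi)])
    fields = np.stack(pairwise_warps[lo:hi])
    return np.tensordot(weights / weights.sum(), fields, axes=1)

# If every pairwise warp is the same constant shift, the average warp
# reproduces that shift exactly.
H, W = 4, 4
warps = [np.ones((2, H, W))] * 5
avg = average_warp(warps, i=2)
assert avg.shape == (2, H, W)
assert np.allclose(avg, 1.0)
```

Averaging displacement fields (rather than images) is what lets each section be deformed toward a locally smooth consensus without an undistorted reference, which is the core idea of the warp-filtering method.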
Points based reconstruction and rendering of 3D shapes from large volume dataset
NASA Astrophysics Data System (ADS)
Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming
2003-05-01
In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information contained in them. But the huge volumes of data generated by modern medical imaging devices constantly challenge real-time processing and rendering algorithms. Spurred by the great success of Points Based Rendering (PBR) in the field of computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.
Reconstruction Error of Calibration Volume's Coordinates for 3D Swimming Kinematics.
Figueiredo, Pedro; Machado, Leandro; Vilas-Boas, João Paulo; Fernandes, Ricardo J
2011-09-01
The aim of this study was to investigate the accuracy and reliability of above- and underwater 3D reconstruction using three calibration volumes with different control point arrangements (#1 - on vertical and horizontal rods; #2 - on vertical and horizontal rods and facets; #3 - on crossed horizontal rods). Each calibration volume (3 × 2 × 3 m) was positioned in a 25 m swimming pool (half above and half below the water surface) and recorded with four underwater and two above-water synchronised cameras (50 Hz). Reconstruction accuracy was determined by calculating the RMS error of twelve validation points. The standard deviation across all digitisations of the same marker was used to estimate reliability. Comparison among different numbers of control points showed that the set of 24 points produced the most accurate results. Volume #2 presented higher accuracy (RMS errors: 5.86 and 3.59 mm for the x axis, 3.45 and 3.11 mm for the y axis and 4.38 and 4.00 mm for the z axis, under and above water respectively) and reliability (SD: underwater cameras ± [0.2; 0.6] mm; above-water cameras ± [0.2; 0.3] mm), which may be considered suitable for 3D swimming kinematic analysis. Results revealed that the RMS error was greater for the underwater analysis, possibly due to refraction. PMID:23486761
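The accuracy measure used here (per-axis RMS error over twelve validation points, in mm) reduces to a one-liner; the (n_points, 3) array layout is an assumption:

```python
import numpy as np

def rms_error(reconstructed, reference):
    """Per-axis RMS reconstruction error over validation points.
    Both arrays are (n_points, 3) with columns x, y, z in mm."""
    return np.sqrt(np.mean((reconstructed - reference) ** 2, axis=0))

# Twelve validation points offset by a constant 5 mm in x only.
ref = np.zeros((12, 3))
rec = ref.copy()
rec[:, 0] += 5.0
err = rms_error(rec, ref)
assert np.allclose(err, [5.0, 0.0, 0.0])
```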
Automated breast mass detection in 3D reconstructed tomosynthesis volumes: a featureless approach.
Singh, Swatee; Tourassi, Georgia D; Baker, Jay A; Samei, Ehsan; Lo, Joseph Y
2008-08-01
The purpose of this study was to propose and implement a computer-aided detection (CADe) tool for breast tomosynthesis. This task was accomplished in two stages: a highly sensitive mass detector followed by a false positive (FP) reduction stage. Breast tomosynthesis data from 100 human subject cases were used, of which 25 subjects had one or more mass lesions and the rest were normal. For stage 1, filter parameters were optimized via a grid search. The CADe-identified suspicious locations were reconstructed to yield 3D CADe volumes of interest. The first stage yielded a maximum sensitivity of 93% with 7.7 FPs/breast volume. Unlike traditional CADe algorithms, in which the second-stage FP reduction is done via feature extraction and analysis, information theory principles were used instead, with mutual information as a similarity metric. Three schemes were proposed, all using leave-one-case-out cross-validation sampling. The three schemes, A, B, and C, differed in the composition of their knowledge base of regions of interest (ROIs). Scheme A's knowledge base comprised all the mass and FP ROIs generated by the first stage of the algorithm. Scheme B's knowledge base contained information from mass ROIs and randomly extracted normal ROIs. Scheme C drew on three sources of information: masses, FPs, and normal ROIs. Performance was also assessed as a function of the composition of the knowledge base, in terms of the number of FP or normal ROIs needed by the system to reach optimal performance. The results indicated that the knowledge base needed no more than 20 times as many FPs and 30 times as many normal ROIs as masses to attain maximal performance. The best overall system performance was 85% sensitivity with 2.4 FPs per breast volume for scheme A, 3.6 FPs per breast volume for scheme B, and 3 FPs per breast volume for scheme C. PMID:18777923
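The FP-reduction stage relies on mutual information as a similarity metric between a query ROI and knowledge-base ROIs. A minimal histogram-based MI estimator; the bin count and estimator are assumptions, since the abstract does not give the exact formulation:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """MI between two equally sized ROIs from their joint grey-level
    histogram: MI = H(A) + H(B) - H(A, B), in nats."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h_xy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return h_x + h_y - h_xy

rng = np.random.default_rng(2)
roi = rng.normal(size=(32, 32))
noise = rng.normal(size=(32, 32))
# An ROI shares far more information with itself than with unrelated noise,
# which is what makes MI usable as a similarity score.
assert mutual_information(roi, roi) > mutual_information(roi, noise)
```

In a knowledge-base scheme like the one described, a query ROI would be scored against stored mass and FP/normal ROIs and classified by which group it is most similar to.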
3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies
Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno
2016-01-01
We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method targets robotized systems that handle single seeds, rotating each seed in front of a camera. Even though such systems feature high position repeatability, at sub-millimeter object scales camera pose variations have to be compensated. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, and experimentally achieved accuracy, and show as a proof of principle that the proposed method is fully sufficient for 3D seed phenotyping purposes. PMID:27375628
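Shape-from-silhouette reconstruction, as named in the abstract, intersects the back-projections of the object silhouettes from all views. A toy sketch in which orthographic views along the grid axes stand in for the calibrated camera geometry of the real system:

```python
import numpy as np

def carve(occupancy, silhouettes, axes):
    """Carve a voxel grid: a voxel survives only if it projects inside
    the object silhouette in every view. Here each 'view' is an
    orthographic projection along one grid axis (a simplification of
    real calibrated camera projections)."""
    kept = occupancy.copy()
    for sil, ax in zip(silhouettes, axes):
        # Broadcast the 2D silhouette back along the projection axis.
        kept &= np.expand_dims(sil, axis=ax)
    return kept

# Carving a full 8^3 grid with the silhouettes of a centred 4^3 cube
# recovers exactly that cube (its visual hull equals the cube itself).
n = 8
cube = np.zeros((n, n, n), dtype=bool)
cube[2:6, 2:6, 2:6] = True
sils = [cube.any(axis=a) for a in range(3)]
hull = carve(np.ones((n, n, n), dtype=bool), sils, axes=range(3))
assert np.array_equal(hull, cube)
```

For non-convex shapes the carved result is only the visual hull, an outer bound on the true surface, which is why the paper pairs the approach with accuracy analyses.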
Determining gully volume from straightforward photo-based 3D reconstruction
NASA Astrophysics Data System (ADS)
James, M. R.; Castillo, C.; Pérez, R.; Taguas, E. V.; Gomez, J. A.; Quinton, J. N.
2012-04-01
In order to quantify soil loss through gully erosion, accurate measurements of gully volume are required. However, gullies are usually extended features, often with complex morphologies, and are challenging to survey appropriately and efficiently. Here we explore the use of a photo-based technique for deriving 3D gully models suitable for detailed erosion studies. Traditional aerial and oblique close-range photogrammetry approaches have previously been used to produce accurate digital elevation models (DEMs) from photographs. However, these techniques require expertise to carry out successfully, use proprietary software and usually need a priori camera calibration. The computer vision approach we adopt here relaxes these requirements and allows 3D models to be produced automatically from collections of unordered photos. We use a freely available 'reconstruction pipeline' (http://blog.neonascent.net/archives/bundler-photogrammetry-package/) that combines structure-from-motion and multi-view stereo algorithms (SfM-MVS) to generate dense point clouds (millions of points). The model is derived from photos taken from different positions with a consumer camera and is then scaled and georeferenced using additional software (http://www.lancs.ac.uk/staff/jamesm/software/sfm_georef.htm) and observations of some control points in the scene. The approach was tested on a ~7-m long sinuous gully section (average width and depth ~2.4 and 1.2 m respectively) in Vertisol soils, near Cordoba, Spain. For benchmark data, the gully topography was determined with a terrestrial laser scanner (Riegl LMS-Z420i, with a cited range accuracy of 10 mm). 191 photos were taken with a Canon EOS 450D with a prime (fixed) 28 mm lens over a period of ~10 minutes. In order to georeference the SfM-MVS model for comparison with the TLS data, 6 control targets were located around the gully and their locations determined by dGPS. Differences between the TLS and SfM-MVS surfaces are dominated by areas of data
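Scaling and georeferencing an SfM-MVS point cloud against surveyed control points amounts to fitting a similarity transform (scale, rotation, translation). A sketch using Umeyama's least-squares method, which is one standard choice and not necessarily what the sfm_georef software implements:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping src control points onto dst, via Umeyama's
    method: dst ~ s * R @ src + t. Points are rows of (n, 3) arrays."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflection
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(3)
src = rng.normal(size=(6, 3))                   # 6 control points, model frame
s_true, t_true = 2.5, np.array([10.0, -3.0, 7.0])
dst = s_true * src + t_true                     # pure scale + shift, no rotation
s, R, t = similarity_transform(src, dst)
assert np.isclose(s, s_true)
assert np.allclose(R, np.eye(3))
assert np.allclose(t, t_true)
```

Six control targets, as used in the study, over-determine the seven transform parameters, so residuals at the targets also give a quick check on georeferencing quality.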
NASA Astrophysics Data System (ADS)
Chan, Heang-Ping; Sahiner, Berkman; Wei, Jun; Hadjiiski, Lubomir M.; Zhou, Chuan; Helvie, Mark A.
2010-03-01
We are developing a computer-aided detection (CAD) system for clustered microcalcifications in digital breast tomosynthesis (DBT). In this preliminary study, we investigated the approach of detecting microcalcifications in the tomosynthesized volume. The DBT volume is first enhanced by 3D multi-scale filtering and analysis of the eigenvalues of Hessian matrices with a calcification response function and signal-to-noise ratio enhancement filtering. Potential signal sites are identified in the enhanced volume and local analysis is performed to further characterize each object. A 3D dynamic clustering procedure is designed to locate potential clusters using hierarchical criteria. We collected a pilot data set of two-view DBT mammograms of 39 breasts containing microcalcification clusters (17 malignant, 22 benign) with IRB approval. A total of 74 clusters were identified by an experienced radiologist in the 78 DBT views. Our prototype CAD system achieved view-based sensitivity of 90% and 80% at an average FP rate of 7.3 and 2.0 clusters per volume, respectively. At the same levels of case-based sensitivity, the FP rates were 3.6 and 1.3 clusters per volume, respectively. For the subset of malignant clusters, the view-based detection sensitivity was 94% and 82% at an average FP rate of 6.0 and 1.5 FP clusters per volume, respectively. At the same levels of case-based sensitivity, the FP rates were 1.2 and 0.9 clusters per volume, respectively. This study demonstrated that computerized microcalcification detection in 3D is a promising approach to the development of a CAD system for DBT. Study is underway to further improve the computer-vision methods and to optimize the processing parameters using a larger data set.
NASA Astrophysics Data System (ADS)
Lougovski, A.; Hofheinz, F.; Maus, J.; Schramm, G.; Will, E.; van den Hoff, J.
2014-02-01
The aim of this study is the evaluation of on-the-fly volume-of-intersection computation for system geometry modelling in 3D PET image reconstruction. For this purpose we propose a simple geometrical model in which the cubic image voxels on the given Cartesian grid are approximated with spheres and the rectangular tubes of response (ToRs) are approximated with cylinders. The model was integrated into a fully 3D list-mode PET reconstruction for performance evaluation. In our model the volume of intersection between a voxel and the ToR is only a function of the impact parameter (the distance from the voxel centre to the ToR axis) and is independent of the relative orientation of voxel and ToR. This substantially reduces the computational complexity of the system matrix calculation. Based on phantom measurements it was determined that adjusting the diameters of the spherical voxels and the cylindrical ToRs in such a way that the actual voxel and ToR volumes are conserved leads to the best compromise between high spatial resolution, low noise, and suppression of Gibbs artefacts in the reconstructed images. Phantom as well as clinical datasets from two different PET systems (Siemens ECAT HR+ and Philips Ingenuity-TF PET/MR) were processed using the developed and the respective vendor-provided (line-of-intersection based) reconstruction algorithms. A comparison of the reconstructed images demonstrated very good performance of the new approach. The evaluation showed the respective vendor-provided reconstruction algorithms to possess 34-41% lower resolution compared to the developed one while exhibiting comparable noise levels. Contrary to explicit point spread function modelling, our model has a simple, straightforward implementation and should be easy to integrate into existing reconstruction software, making it competitive with other existing resolution recovery techniques.
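The key property claimed for the sphere-cylinder model (the intersection volume depends only on the impact parameter) is easy to explore numerically, e.g. by Monte Carlo integration; the radii below are arbitrary illustration values:

```python
import numpy as np

def intersection_volume(b, r_sph=1.0, r_cyl=1.0, n=200_000, seed=0):
    """Monte Carlo volume of intersection between a sphere (radius
    r_sph, centre at distance b from the cylinder axis) and an infinite
    cylinder (radius r_cyl, axis along z). By symmetry this depends on
    the impact parameter b only, not on any relative orientation."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(-r_sph, r_sph, size=(n, 3))     # sample the bounding cube
    inside_sphere = (p**2).sum(axis=1) <= r_sph**2
    inside_cyl = (p[:, 0] - b)**2 + p[:, 1]**2 <= r_cyl**2
    frac = np.mean(inside_sphere & inside_cyl)
    return frac * (2 * r_sph)**3

# The system-matrix weight falls monotonically with impact parameter and
# vanishes once the sphere clears the tube of response entirely.
v0 = intersection_volume(0.0)
v1 = intersection_volume(1.0)
v2 = intersection_volume(2.5)
assert v0 > v1 > v2
assert v2 == 0.0
```

In a real implementation one would tabulate this volume once as a 1D function of b and look it up per (voxel, ToR) pair, which is exactly the complexity reduction the abstract describes.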
3D Ion Temperature Reconstruction
NASA Astrophysics Data System (ADS)
Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi
2009-11-01
The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection.
Advancement of 31P Magnetic Resonance Spectroscopy Using GRAPPA Reconstruction on a 3D Volume
NASA Astrophysics Data System (ADS)
Clevenger, Tony
The overall objective of this research is to improve currently available metabolic imaging techniques for clinical use in monitoring and predicting treatment response to radiation therapy in liver cancer. Liver metabolism correlates with inflammatory and neoplastic liver diseases, which alter the intracellular concentration of phosphorus-31 (31P) metabolites [1]. It is assumed that such metabolic changes occur prior to physical changes of the tissue. Therefore, information on regional changes of 31P metabolites in the liver, obtained by Magnetic Resonance Spectroscopic Imaging (MRSI) [1,2], can help in diagnosis and follow-up of various liver diseases. Specifically, there appears to be an immediate need for this technology both for the assessment of tumor response in patients with Hepatocellular Carcinoma (HCC) treated with Stereotactic Body Radiation Therapy (SBRT) [3-5], and for the assessment of radiation toxicity, which can result in worsening liver dysfunction [6]. Pilot data from our lab have shown that 31P MRSI has the potential to identify treatment response five months sooner than conventional methods [7], and to assess the biological response of liver tissue to radiation 24 hours post radiation therapy [8]. While these data are very promising, commonly occurring drawbacks for 31P MRSI are patient discomfort due to long scan times and prone positioning within the scanner, as well as reduced data quality due to patient motion and respiration. To further advance the full potential of 31P MRSI as a clinical diagnostic tool in the management of liver cancer, this PhD research project had the following aims: I) Reduce the long acquisition time of 3D 31P MRS by formulating and implementing an appropriate GRAPPA undersampling scheme and reconstruction on a clinical MRI scanner; II) Test and quantitatively validate GRAPPA reconstruction of 3D 31P MRSI on developmental phantoms and healthy volunteers. At completion, this work should considerably advance 31P MRSI
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Zarco-Tejada, Pablo; Laredo, Mario; Gómez, Jose Alfonso
2013-04-01
estimates of the main dimensions of the gully (length, slope profile and total volume) for both methods. This analysis proved useful to define the field of application for each technique, considering their accuracy, cost and processing requirements. References Castillo, C., R. Perez, M.R. James, J.N. Quinton, E.V. Taguas, J.A. Gómez. 2012. Comparing the Accuracy of Several Field Methods for Measuring Gully Erosion. Soil Science Society of America Journal 76: 1319-1332. James, M. and Robson, S. 2012. Straightforward reconstruction of 3d surfaces and topography with a camera: Accuracy and geoscience application. Journal of Geophysical Research, 117.
D'Alessandro, Brian; Dhawan, Atam P
2012-11-01
Subsurface information about skin lesions, such as the blood volume beneath the lesion, is important for the analysis of lesion severity towards early detection of skin cancer such as malignant melanoma. Depth information can be obtained from diffuse reflectance based multispectral transillumination images of the skin. An inverse volume reconstruction method is presented which uses a genetic algorithm optimization procedure with a novel population initialization routine and nudge operator based on the multispectral images to reconstruct the melanin and blood layer volume components. Forward model evaluation for fitness calculation is performed using a parallel processing voxel-based Monte Carlo simulation of light in skin. Reconstruction results for simulated lesions show excellent volume accuracy. Preliminary validation is also done using a set of 14 clinical lesions, categorized into lesion severity by an expert dermatologist. Using two features, the average blood layer thickness and the ratio of blood volume to total lesion volume, the lesions can be classified into mild and moderate/severe classes with 100% accuracy. The method therefore has excellent potential for detection and analysis of pre-malignant lesions. PMID:22829392
3D model reconstruction of underground goaf
NASA Astrophysics Data System (ADS)
Fang, Yuanmin; Zuo, Xiaoqing; Jin, Baoxuan
2005-10-01
By constructing a 3D model of an underground goaf, we can better control the mining process and arrange mining work reasonably. However, the shapes of goafs and the laneways among them are very irregular, which creates great difficulties in data acquisition and 3D model reconstruction. In this paper, we investigate methods for data acquisition and 3D model construction of underground goafs, building topological relations among goafs. The main contents are as follows: a) An efficient encoding rule is proposed to structure the field measurement data. b) A 3D goaf model construction method is put forward that combines several TIN (triangulated irregular network) pieces, and an efficient automatic algorithm for processing TIN boundaries is proposed. c) Topological relations among goaf models are established. The TIN object is the basic modeling element of the goaf 3D model, and the topological relations among goafs are created and maintained by building topological relations among TIN objects. Based on this, various 3D spatial analysis functions can be performed, including transects and volume calculation of goafs. A prototype is developed which realizes the models and algorithms proposed in this paper.
NASA Technical Reports Server (NTRS)
2002-01-01
In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The other product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.
NASA Astrophysics Data System (ADS)
Park, Junhan; Lee, Changwoo; Baek, Jongduk
2015-03-01
In medical imaging systems, several factors (e.g., reconstruction algorithm, noise structure, target size, contrast) affect detection performance and need to be considered for object detection. In a cone beam CT system, FDK reconstruction produces different noise structures in axial and coronal slices, so we analyzed the direction-dependent detectability of objects using the detection SNR of a Channelized Hotelling observer. To calculate the detection SNR, a difference-of-Gaussian channel model with 10 channels was implemented, and 20 sphere objects with different radii (0.25 mm to 5 mm, equally spaced by 0.25 mm), reconstructed by the FDK algorithm, were used as object templates. The covariance matrices in the axial and coronal directions were estimated from 3000 reconstructed noise volumes, and then the SNR ratio between the axial and coronal directions was calculated. The corresponding 2D noise power spectra were also calculated. The results show that as the object size increases, the SNR ratio decreases, falling below 1 when the object radius exceeds 2.5 mm. This is because the axial (coronal) noise power is higher in the high (low) frequency band, and therefore the detectability of a small (large) object is higher in coronal (axial) images. Our results indicate that it is more beneficial to use coronal slices to improve the detectability of small objects in a cone beam CT system.
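The detection SNR of a Channelized Hotelling observer with difference-of-Gaussian channels can be sketched as below; the DoG parameterization and the white-noise ROIs are assumptions, since the abstract gives only the channel count:

```python
import numpy as np

def dog_channels(size, n_channels=10, sigma0=0.6, alpha=1.4):
    """Difference-of-Gaussian radial channels on a size x size grid.
    (Assumed parameterization; the paper's DoG parameters are not given.)"""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x**2 + y**2
    chans = []
    for j in range(n_channels):
        s1, s2 = sigma0 * alpha**j, sigma0 * alpha**(j + 1)
        chans.append(np.exp(-r2 / (2 * s2**2)) - np.exp(-r2 / (2 * s1**2)))
    return chans

def cho_snr(signal, noise_rois, channels):
    """CHO detection SNR: channelize the known signal and the noise-only
    ROIs, then SNR^2 = s_c^T K^-1 s_c, with s_c the channelized signal
    and K the estimated channel covariance."""
    U = np.stack([c.ravel() for c in channels])            # (C, npix)
    s_c = U @ signal.ravel()                               # (C,)
    v = noise_rois.reshape(len(noise_rois), -1) @ U.T      # (N, C)
    K = np.cov(v, rowvar=False)
    return float(np.sqrt(s_c @ np.linalg.solve(K, s_c)))

rng = np.random.default_rng(4)
size = 16
sig = np.exp(-((np.mgrid[:size, :size] - size // 2) ** 2).sum(0) / 8.0)
noise = rng.normal(size=(200, size, size))
ch = dog_channels(size)
snr1 = cho_snr(sig, noise, ch)
snr2 = cho_snr(2 * sig, noise, ch)     # doubling the signal doubles the SNR
assert snr2 > snr1 > 0
```

In the study's setting, computing this SNR separately from axial and coronal noise ROIs for each sphere template yields the axial/coronal SNR ratio that the abstract reports.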
Forensic 3D Scene Reconstruction
LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.
1999-10-12
Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.
Forensic 3D scene reconstruction
NASA Astrophysics Data System (ADS)
Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.
2000-05-01
Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.
NASA Astrophysics Data System (ADS)
Liang, Xian-hua; Sun, Wei-dong
2011-06-01
Inventory checking is one of the most significant tasks for grain reserves, and plays a very important role in the macro-control of food supply and food security. A simple, fast and accurate method to obtain internal structure information, and further to estimate the volume of the grain in storage, is needed. In our developed system, a specially designed multi-site laser scanning system is used to acquire range data clouds of the internal structure of the grain storage. However, due to the seriously uneven distribution of the range data, these data are first preprocessed by an adaptive re-sampling method to reduce data redundancy as well as noise. The range data are then segmented and useful features, such as plane and cylinder information, are extracted. With these features, a coarse registration between all of the single-site range data is performed, and an Iterative Closest Point (ICP) algorithm is then carried out to achieve fine registration. Taking advantage of the fact that grain storage structures are well defined and limited in type, a fast automatic registration method based on a priori models is proposed to register the multi-site range data more efficiently. After the integration of the multi-site range data, the grain surface is finally reconstructed by a Delaunay-based algorithm and the grain volume is estimated by a numerical integration method. This method has been applied to two common types of grain storage, and experimental results show that it is effective and accurate, and that it avoids the cumulative errors of registering the overlapped areas pair-wise.
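The coarse-to-fine alignment described in this abstract bottoms out in point-to-point ICP. A minimal sketch, with brute-force nearest neighbours and an SVD-based rigid fit (the names, and the assumption of a reasonable initial alignment, are ours, not the authors'):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm), for matched point sets given as (n, d) arrays."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=40):
    """Basic point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    R, t = best_rigid_transform(src, cur)   # composite transform
    return R, t, cur
```

A production version would replace the quadratic-cost distance matrix with a k-d tree, which is what makes multi-site registration of dense scans tractable.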
3D puzzle reconstruction for archeological fragments
NASA Astrophysics Data System (ADS)
Jampy, F.; Hostein, A.; Fauvet, E.; Laligant, O.; Truchetet, F.
2015-03-01
The reconstruction of broken artifacts is a common task in the archeology domain; it can now be supported by 3D data acquisition devices and computer processing. Many past works have been dedicated to reconstructing 2D puzzles, but very few propose a true 3D approach. We present here a complete solution including a dedicated transportable 3D acquisition set-up and a virtual tool with a graphic interface allowing archeologists to manipulate the fragments and interactively reconstruct the puzzle. The whole lateral part is acquired by rotating the fragment around an axis chosen within a light sheet, using a stepper motor synchronized with the camera frame clock. Another camera provides a top view of the fragment under scanning. A scanning accuracy of 100 μm is attained. The iterative automatic processing algorithm is based on segmentation of the lateral part of the fragments into facets, followed by a 3D matching that provides the user with a ranked short list of possible assemblies. The device has been applied to the reconstruction of a set of 1200 fragments from broken tablets bearing a Latin inscription dating from the first century AD.
3D EIT image reconstruction with GREIT.
Grychtol, Bartłomiej; Müller, Beat; Adler, Andy
2016-06-01
Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
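GREIT is a training-based one-step linear reconstruction. Its core idea (fit a reconstruction matrix to simulated target/measurement pairs) can be sketched as a ridge regression; this is our simplification, not the published GREIT figure-of-merit machinery, and all names are illustrative:

```python
import numpy as np

def train_linear_reconstructor(Y, X, lam=1e-6):
    """Fit a one-step linear reconstruction matrix R minimising
    ||R @ Y - X||^2 + lam * ||R||^2, where each column of Y is a
    simulated measurement vector and the matching column of X is the
    desired (target) image."""
    G = Y @ Y.T + lam * np.eye(Y.shape[0])
    # solve G R^T = Y X^T (G is symmetric), i.e. R = X Y^T G^{-1}
    return np.linalg.solve(G, Y @ X.T).T
```

Reconstructing a new frame `y` is then a single matrix-vector product `R @ y`, which is what makes this family of algorithms attractive for real-time lung monitoring; the 3D extension in the paper changes the training targets and electrode geometry, not this basic structure.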
The PRISM3D paleoenvironmental reconstruction
Dowsett, H.; Robinson, M.; Haywood, A.M.; Salzmann, U.; Hill, Daniel; Sohl, L.E.; Chandler, M.; Williams, Mark; Foley, K.; Stoll, D.K.
2010-01-01
The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstruction is an internally consistent and comprehensive global synthesis of a past interval of relatively warm and stable climate. It is regularly used in model studies that aim to better understand Pliocene climate, to improve model performance in future climate scenarios, and to distinguish model-dependent climate effects. The PRISM reconstruction is constantly evolving in order to incorporate additional geographic sites and environmental parameters, and is continuously refined by independent research findings. The new PRISM three dimensional (3D) reconstruction differs from previous PRISM reconstructions in that it includes a subsurface ocean temperature reconstruction, integrates geochemical sea surface temperature proxies to supplement the faunal-based temperature estimates, and uses numerical models for the first time to augment fossil data. Here we outline the components of PRISM3D and describe new findings specific to the new reconstruction. Highlights of the new PRISM3D reconstruction include removal of Hudson Bay and the Great Lakes and creation of open waterways in locations where the current bedrock elevation is less than 25 m above modern sea level, due to the removal of the West Antarctic Ice Sheet and the reduction of the East Antarctic Ice Sheet. The mid-Piacenzian oceans were characterized by a reduced east-west temperature gradient in the equatorial Pacific, but PRISM3D data do not imply permanent El Niño conditions. The reduced equator-to-pole temperature gradient that characterized previous PRISM reconstructions is supported by significant displacement of vegetation belts toward the poles, is extended into the Arctic Ocean, and is confirmed by multiple proxies in PRISM3D. Arctic warmth coupled with increased dryness suggests the formation of warm and salty paleo North Atlantic Deep Water (NADW) and a more vigorous thermohaline circulation system.
New Reconstruction Accuracy Metric for 3D PIV
NASA Astrophysics Data System (ADS)
Bajpayee, Abhishek; Techet, Alexandra
2015-11-01
Reconstruction for 3D PIV typically relies on recombining images captured from different viewpoints by multiple cameras/apertures. Ideally, the quality of reconstruction dictates the accuracy of the derived velocity field. A reconstruction quality parameter Q is commonly used as a measure of the accuracy of reconstruction algorithms. By definition, a high Q value requires the intensity peak levels and shapes in the reconstructed and reference volumes to match. We show that accurate velocity fields rely only on the peak locations in the volumes, not on intensity peak levels and shapes. In synthetic aperture (SA) PIV reconstructions, the intensity peak shapes vary with the number of cameras, and the peak heights vary with spatial/temporal particle intensity variation. This lowers Q but not the accuracy of the derived velocity field. We introduce a new velocity vector correlation factor Qv as a metric to assess the accuracy of 3D PIV techniques, which provides a better indication of algorithm accuracy. For SAPIV, the number of cameras required for a high Qv is lower than that for a high Q. We discuss Qv in the context of 3D PIV and also present a preliminary comparison of the performance of TomoPIV and SAPIV based on Qv.
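Both quality factors described here reduce to a normalised cross-correlation applied to different quantities. A sketch in our own notation (with `qv_factor` stacking all velocity components, which may differ from the paper's exact definition):

```python
import numpy as np

def q_factor(recon, ref):
    """Intensity reconstruction quality Q: normalised cross-correlation
    between reconstructed and reference particle volumes."""
    return float((recon * ref).sum()
                 / np.sqrt((recon ** 2).sum() * (ref ** 2).sum()))

def qv_factor(u_rec, u_ref):
    """Velocity correlation factor Qv: the same normalised correlation,
    but applied to the derived velocity fields (components stacked)."""
    a = np.concatenate([c.ravel() for c in u_rec])
    b = np.concatenate([c.ravel() for c in u_ref])
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Blurring or rescaling the particle peaks lowers Q, while Qv stays at 1 whenever the derived velocity field is unchanged, which is the distinction the abstract draws.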
IFSAR processing for 3D target reconstruction
NASA Astrophysics Data System (ADS)
Austin, Christian D.; Moses, Randolph L.
2005-05-01
In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
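The height-error mechanism analyzed in this abstract can be illustrated with a toy two-antenna model in which height maps linearly to interferometric phase; the `scale` constant stands in for the wavelength/range/baseline geometry, and all names are our own assumptions:

```python
import numpy as np

def ifsar_height(s1, s2, scale):
    """Estimate scatterer height from a two-antenna interferometric
    pair: height is proportional to the interferometric phase."""
    return scale * np.angle(s1 * np.conj(s2))

def cell_return(heights, amps, scale):
    """Complex response of one resolution cell containing several
    scatterers, as seen by each of the two antennas."""
    phases = np.asarray(heights, dtype=float) / scale   # phase encodes height
    a = np.asarray(amps, dtype=float)
    s1 = (a * np.exp(0j * phases)).sum()                # reference antenna
    s2 = (a * np.exp(-1j * phases)).sum()               # offset antenna
    return s1, s2
```

With a single scatterer the phase, and hence the height, is recovered exactly; with two equal-amplitude scatterers at heights 0 and 5 in the same cell, the estimate lands at 2.5, matching neither true height, which is the bias the paper's magnitude-difference and phase-linearity tests are designed to detect.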
3D reconstruction of tensors and vectors
Defrise, Michel; Gullberg, Grant T.
2005-02-17
Here we have developed formulations for the reconstruction of 3D tensor fields from planar (Radon) and line-integral (X-ray) projections of 3D vector and tensor fields. Much of the motivation for this work is the potential application of MRI to perform diffusion tensor tomography. The goal is to develop a theory for the reconstruction of both Radon planar and X-ray or line-integral projections because of the flexibility of MRI to obtain both of these types of projections in 3D. The development presented here for the linear tensor tomography problem provides insight into the structure of the nonlinear MRI diffusion tensor inverse problem. A particular application of tensor imaging in MRI is the potential use of cardiac diffusion tensor tomography for determining in vivo cardiac fiber structure. One difficulty in the cardiac application is the motion of the heart. This presents a need to develop future theory for tensor tomography in a motion field, which means developing a better understanding of the MRI signal for diffusion processes in a deforming medium. The techniques developed may allow the application of MRI tensor tomography to the study of the structure of fiber tracts in the brain, atherosclerotic plaque, and spine, in addition to fiber structure in the heart. However, the relations presented are also applicable to other fields in medical imaging, such as diffraction tomography using ultrasound. The mathematics presented can also be extended to the exponential Radon transform of tensor fields and to other geometric acquisitions such as cone beam tomography of tensor fields.
Faster, higher quality volume visualization for 3D medical imaging
NASA Astrophysics Data System (ADS)
Kalvin, Alan D.; Laine, Andrew F.; Song, Ting
2008-03-01
The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization of tomographic modalities, such as x-ray CT, as well as for MRI.
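The two projection operators named in this abstract both reduce to a pass along one volume axis. A minimal sketch (a plain axis maximum for MIP, and front-to-back emission/absorption compositing as a bare-bones stand-in for full VR; names and the simplified optical model are ours):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection along one axis of a 3D volume."""
    return volume.max(axis=axis)

def volume_render(volume, opacity, axis=0):
    """Front-to-back emission/absorption compositing along an axis."""
    vol = np.moveaxis(volume, axis, 0)
    alp = np.moveaxis(opacity, axis, 0)
    out = np.zeros(vol.shape[1:])
    trans = np.ones(vol.shape[1:])       # accumulated transparency
    for v, a in zip(vol, alp):
        out += trans * a * v             # emission attenuated so far
        trans *= 1.0 - a                 # absorption of this slice
    return out
```

The contrast is that MIP reports the brightest voxel along each ray regardless of depth order, while VR lets opaque material in front occlude what lies behind it.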
Adapting 3D Equilibrium Reconstruction to Reconstruct Weakly 3D H-mode Tokamaks
NASA Astrophysics Data System (ADS)
Cianciosa, M. R.; Hirshman, S. P.; Seal, S. K.; Unterberg, E. A.; Wilcox, R. S.; Wingen, A.; Hanson, J. D.
2015-11-01
The application of resonant magnetic perturbations for edge localized mode (ELM) mitigation breaks the toroidal symmetry of tokamaks. In these scenarios, the axisymmetric assumptions of the Grad-Shafranov equation no longer apply. By extension, equilibrium reconstruction tools built around these axisymmetric assumptions are insufficient to fully reconstruct a 3D perturbed equilibrium. 3D reconstruction tools typically work on systems where the 3D components of signals are a significant part of the input signals. In nominally axisymmetric systems, applied field perturbations can be on the order of 1% of the main field or less. To reconstruct these equilibria, the 3D component of the signals must be isolated from the axisymmetric portion to provide the necessary information for reconstruction. This presentation will report on the adaptation of V3FIT for application to DIII-D H-mode discharges with applied resonant magnetic perturbations (RMPs). Newly implemented motional Stark effect signals and modeling of electric field effects will also be discussed. Work supported under U.S. DOE Cooperative Agreement DE-AC05-00OR22725.
Iterative Reconstruction of Volumetric Particle Distribution for 3D Velocimetry
NASA Astrophysics Data System (ADS)
Wieneke, Bernhard; Neal, Douglas
2011-11-01
A number of different volumetric flow measurement techniques exist for following the motion of illuminated particles. For experiments with lower seeding densities, 3D-PTV uses recorded images from typically 3-4 cameras and tracks the individual particles in space and time. For flows with a higher seeding density, tomographic PIV uses a tomographic reconstruction algorithm (e.g. MART) to reconstruct voxel intensities of the recorded volume, followed by cross-correlation of subvolumes to provide instantaneous 3D vector fields on a regular grid. A new hybrid algorithm is presented which iteratively reconstructs the 3D particle distribution directly, using particles with certain imaging properties instead of voxels as base functions. It is shown with synthetic data that this method is capable of reconstructing densely seeded flows up to 0.05 particles per pixel (ppp) with the same or higher accuracy than 3D-PTV and tomographic PIV. Finally, this new method is validated using experimental data on a turbulent jet.
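The MART step referenced above multiplicatively corrects each voxel by the ratio of recorded to projected pixel intensity, weighted by how much that voxel contributes to the pixel's line of sight. A dense toy version (real tomographic PIV implementations use sparse weights and camera calibration models; names here are ours):

```python
import numpy as np

def mart(W, p, n_iter=20, mu=1.0):
    """Multiplicative ART: find nonnegative voxel intensities E such
    that W @ E matches the recorded pixel intensities p.
    W[i, j] is the weight of voxel j on the line of sight of pixel i."""
    E = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(len(p)):
            proj = W[i] @ E
            if proj > 0:
                # per-voxel multiplicative correction, damped by mu
                E *= (p[i] / proj) ** (mu * W[i])
    return E
```

Because the update is multiplicative, voxels initialised nonnegative stay nonnegative, which suits intensity reconstruction; the hybrid method in the abstract replaces the voxel basis with parametrised particles.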
Photogrammetric 3D reconstruction using mobile imaging
NASA Astrophysics Data System (ADS)
Fritsch, Dieter; Syll, Miguel
2015-03-01
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
NASA Astrophysics Data System (ADS)
Scheins, J. J.; Vahedipour, K.; Pietrzyk, U.; Shah, N. J.
2015-12-01
For high-resolution, iterative 3D PET image reconstruction the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements to evaluate the tremendous number of LORs provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the number of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. With our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) are demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed, due to massive waste of memory bandwidth and inefficient cache usage. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable efficient single instruction multiple data (SIMD) vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs.
3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction
Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie
2015-01-01
Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low for each frame, they are located to the nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate the emitters at a high density causes poor temporal resolution of localization-based superresolution technique and significantly limits its application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphic processing unit (GPU), which speeds up processing 10 times compared with central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
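The 3D weighted-centroid debiasing step mentioned in this abstract amounts to replacing a discrete grid localisation by the intensity-weighted mean position of the recovered voxel weights. A sketch with hypothetical names (the surrounding compressed-sensing solver is omitted):

```python
import numpy as np

def weighted_centroid(weights, coords):
    """Debias a grid-based (compressed-sensing) localisation: return
    the intensity-weighted centroid of the recovered voxel weights.
    weights: (n,) nonnegative; coords: (n, 3) voxel centre positions."""
    w = np.asarray(weights, dtype=float)
    return np.asarray(coords, dtype=float).T @ (w / w.sum())
```

Because the centroid interpolates between voxel centres, an emitter lying off the reconstruction grid no longer snaps to the nearest voxel, which is the bias the debiasing step removes.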
3-D Volume Rendering of Sand Specimen
NASA Technical Reports Server (NTRS)
2004-01-01
Computed tomography (CT) images of resin-impregnated Mechanics of Granular Materials (MGM) specimens are assembled to provide 3-D volume renderings of density patterns formed by dislocation under the external loading stress profile applied during the experiments. The experiments were flown on STS-79 and STS-89. Principal Investigator: Dr. Stein Sture
3D Surface Reconstruction and Automatic Camera Calibration
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre
2004-01-01
This view-graph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.
Volume rendering for interactive 3D segmentation
NASA Astrophysics Data System (ADS)
Toennies, Klaus D.; Derz, Claus
1997-05-01
Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3D medical image sequences. Visual cues such as shading and color let the user distinguish structures in the 3D display that are incompletely extracted by threshold segmentation. In order to be truly helpful, analyzed information needs to be quantified and transferred back into the data. We extend our previously presented scheme for such display by establishing a communication between visual analysis and the display process. The main tool is a selective 3D picking device. To be useful on a rather rough segmentation, the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Subsequently, picking is carried out on the 2D display by casting a ray into the volume. The picking device is made pre-selective using already existing segmentation information. Thus, objects can be picked that are visible behind semi-transparent surfaces of other structures. Information generated by a later connected-component analysis can then be integrated into the data. Data examination is continued on an improved display, letting the user actively participate in the analysis process. Results of this display-and-interaction scheme proved to be very effective. The viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify this information. The approach introduces 3D computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3D segmentation.
Clinical Experience With A Portable 3-D Reconstruction Program
NASA Astrophysics Data System (ADS)
Holshouser, Barbara A.; Christiansen, Edwin L.; Thompson, Joseph R.; Reynolds, R. Anthony; Goldwasser, Samuel M.
1988-06-01
Clinical experience with a computer program for reconstructing and visualizing three-dimensional (3-D) structures is reported. Applications to the study of soft-tissue and skeletal structures, such as the temporomandibular joint and craniofacial anatomy, using computed tomography (CT) data are described. Several features specific to the computer algorithm are demonstrated and evaluated. These include: (1) manipulation of density windows to selectively visualize bone or soft tissue structures; (2) the efficacy of gradient shading algorithms in revealing fine surface detail; and (3) the rapid generation of cut-away views revealing details of internal structures. Also demonstrated is the importance of high resolution data as input to the 3-D program. The implementation of the program (VoxelView-32) described here is on a MASSCOMP computer running UNIX. Data were collected with General Electric or Siemens CT scanners and transferred to the MASSCOMP for off-line 3-D reconstruction, via magnetic tape or Ethernet. An interactive graphics facility on the MASSCOMP allows viewing of 2-D slices, subregioning, and selection of lower and upper density thresholds for segmentation. The software then enters a pre-processing phase during which a volume representation of the segmented object (soft tissue or bone) is automatically created. This is followed by a rendering phase during which multiple views of the segmented object are automatically generated. The pre-processing phase typically takes 4 to 8 minutes (although very large datasets may require as much as 30 minutes) and the rendering phase typically takes 1 to 2 minutes for each 3-D view. Volume representation and rendering techniques are used at all stages of the processing, and gradient shading is used for enhanced surface detail.
3D Reconstruction of Coronary Artery Vascular Smooth Muscle Cells
Luo, Tong; Chen, Huan; Kassab, Ghassan S.
2016-01-01
Aims The 3D geometry of individual vascular smooth muscle cells (VSMCs), which is essential for understanding the mechanical function of blood vessels, is currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation. Methods and Results A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps and to extract the 3D geometry of VSMCs. A new edge-blocking model was introduced to recognize cell boundaries, while an edge-growing method was developed for optimal interpolation and edge verification. The proposed methods were designed based on a region of interest (ROI) selected by the user and interactive responses for a limited set of key edges. Enhanced cell boundary features were used to construct the cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt angle measurements, while other parameters extracted from the 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width and thickness of VSMCs were 62.9±14.9 μm, 4.6±0.6 μm and 6.2±1.8 μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction with two mean angles of -19.4±9.3° and 10.9±4.7°, while an out-of-plane (i.e., radial tilt) angle was found to be 8±7.6° with a median of 5.7°. Conclusions A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls from optical image stacks. The results were validated by a virtual phantom and manual measurements. The obtained 3D geometries can be utilized in mathematical models and lead to a better understanding of vascular mechanical properties and function. PMID:26882342
Discussion of Source Reconstruction Models Using 3D MCG Data
NASA Astrophysics Data System (ADS)
Melis, Massimo De; Uchikawa, Yoshinori
In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component data of the MCG. The results show that a distributed source model achieves the best accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the different source models to be found.
The sinogram polygonizer for reconstructing 3D shapes.
Yamanaka, Daiki; Ohtake, Yutaka; Suzuki, Hiromasa
2013-11-01
This paper proposes a novel approach, the sinogram polygonizer, for directly reconstructing 3D shapes from sinograms (i.e., the primary output from X-ray computed tomography (CT) scanners consisting of projection image sequences of an object shown from different viewing angles). To obtain a polygon mesh approximating the surface of a scanned object, a grid-based isosurface polygonizer, such as Marching Cubes, has been conventionally applied to the CT volume reconstructed from a sinogram. In contrast, the proposed method treats CT values as a continuous function and directly extracts a triangle mesh based on tetrahedral mesh deformation. This deformation involves quadratic error metric minimization and optimal Delaunay triangulation for the generation of accurate, high-quality meshes. Thanks to the analytical gradient estimation of CT values, sharp features are well approximated, even though the generated mesh is very coarse. Moreover, this approach eliminates aliasing artifacts on triangle meshes. PMID:24029910
3D scene reconstruction based on 3D laser point cloud combining UAV images
NASA Astrophysics Data System (ADS)
Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen
2016-03-01
Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment applies multi-source data fusion to 3D scene reconstruction based on the principles of 3D laser scanning, using laser point cloud data as the basis, a Digital Ortho-photo Map as an auxiliary source, and 3ds Max software as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the reconstructed 3D scene has good truthfulness and that its accuracy meets the needs of 3D scene construction.
3D Equilibrium Reconstructions in DIII-D
NASA Astrophysics Data System (ADS)
Lao, L. L.; Ferraro, N. W.; Strait, E. J.; Turnbull, A. D.; King, J. D.; Hirshman, H. P.; Lazarus, E. A.; Sontag, A. C.; Hanson, J.; Trevisan, G.
2013-10-01
Accurate and efficient 3D equilibrium reconstruction is needed in tokamaks for study of 3D magnetic field effects on experimentally reconstructed equilibrium and for analysis of MHD stability experiments with externally imposed magnetic perturbations. A large number of new magnetic probes have been recently installed in DIII-D to improve 3D equilibrium measurements and to facilitate 3D reconstructions. The V3FIT code has been in use in DIII-D to support 3D reconstruction and the new magnetic diagnostic design. V3FIT is based on the 3D equilibrium code VMEC that assumes nested magnetic surfaces. V3FIT uses a pseudo-Newton least-square algorithm to search for the solution vector. In parallel, the EFIT equilibrium reconstruction code is being extended to allow for 3D effects using a perturbation approach based on an expansion of the MHD equations. EFIT uses the cylindrical coordinate system and can include the magnetic island and stochastic effects. Algorithms are being developed to allow EFIT to reconstruct 3D perturbed equilibria directly making use of plasma response to 3D perturbations from the GATO, MARS-F, or M3D-C1 MHD codes. DIII-D 3D reconstruction examples using EFIT and V3FIT and the new 3D magnetic data will be presented. Work supported in part by US DOE under DE-FC02-04ER54698, DE-FG02-95ER54309 and DE-AC05-06OR23100.
3D Building Reconstruction Using Dense Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.
2016-06-01
Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the point clouds of roofs and walls into planar groups. By generating the related surfaces and applying geometrical constraints together with symmetry considerations, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The reconstructed model reaches LoD3 level of detail, with eaves, roof fractions and dormers modelled.
Interior Reconstruction Using the 3D Hough Transform
NASA Astrophysics Data System (ADS)
Dumitru, R.-C.; Borrmann, D.; Nüchter, A.
2013-02-01
Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments automatically, posing challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds and reconstructs the scene at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
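The Hough transform for planes parametrizes each plane by a normal direction and an offset, and lets every point vote for all planes passing through it. A toy sketch of this idea (far simpler than the randomized variants used in practice; the discretization and function name are assumptions) might look like:

```python
import numpy as np

def hough_planes(points, n_angle=18, rho_step=0.1, rho_max=3.0):
    """Minimal 3D Hough transform for plane detection. Planes are written as
    rho = p . n(theta, phi) with n a unit normal from spherical angles; each
    point votes for every (theta, phi, rho) cell consistent with it, and the
    accumulator maximum gives the dominant plane."""
    thetas = np.linspace(0, np.pi, n_angle, endpoint=False)
    phis = np.linspace(0, np.pi, n_angle, endpoint=False)
    n_rho = int(2 * rho_max / rho_step)
    acc = np.zeros((n_angle, n_angle, n_rho), dtype=int)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            n = np.array([np.sin(ph) * np.cos(th),
                          np.sin(ph) * np.sin(th),
                          np.cos(ph)])
            rho = points @ n
            k = ((rho + rho_max) / rho_step).astype(int)
            ok = (k >= 0) & (k < n_rho)
            np.add.at(acc[i, j], k[ok], 1)  # one vote per point per direction
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    n = np.array([np.sin(phis[j]) * np.cos(thetas[i]),
                  np.sin(phis[j]) * np.sin(thetas[i]),
                  np.cos(phis[j])])
    return n, -rho_max + (k + 0.5) * rho_step
```

The angular and radial bin widths trade detection accuracy against accumulator size, which is why practical interior-reconstruction systems use refined or randomized Hough variants.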
Tomographic system for 3D temperature reconstruction
NASA Astrophysics Data System (ADS)
Antos, Martin; Malina, Radomir
2003-11-01
The novel laboratory system for optical tomography is used to obtain the three-dimensional temperature field around a heated element. Mach-Zehnder holographic interferometers with diffusive illumination of the phase object make it possible to record multidirectional holographic interferograms over viewing angles from 0 deg to 108 deg. These interferograms form the input data for computer tomography of the 3D distribution of the refractive-index variation, which characterizes the physical state of the studied medium. The configuration of the system allows automatic projection scanning of the studied phase object. The computer calculates the wavefront deformation for each projection, making use of different Fourier-transform and phase-sampling evaluation methods. The experimental set-up is presented together with experimental results.
3D scene reconstruction from multi-aperture images
NASA Astrophysics Data System (ADS)
Mao, Miao; Qin, Kaihuai
2014-04-01
With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via a programmable aperture. Secondly, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate the camera parameters and the 3D positions of the matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
Accuracy of 3D Reconstruction in an Illumination Dome
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay; Toschi, Isabella; Nocerino, Erica; Hess, Mona; Remondino, Fabio; Robson, Stuart
2016-06-01
The accuracy of 3D surface reconstruction was compared from image sets of a Metric Test Object taken in an illumination dome by two methods: photometric stereo and improved structure-from-motion (SfM), using point cloud data from a 3D colour laser scanner as the reference. Metrics included pointwise height differences over the digital elevation model (DEM), and 3D Euclidean differences between corresponding points. The enhancement of spatial detail was investigated by blending high frequency detail from photometric normals, after a Poisson surface reconstruction, with low frequency detail from a DEM derived from SfM.
Improving 3D Genome Reconstructions Using Orthologous and Functional Constraints
Diament, Alon; Tuller, Tamir
2015-01-01
The study of the 3D architecture of chromosomes has been advancing rapidly in recent years. While a number of methods for 3D reconstruction of genomic models based on Hi-C data were proposed, most of the analyses in the field have been performed on different 3D representation forms (such as graphs). Here, we reproduce most of the previous results on the 3D genomic organization of the eukaryote Saccharomyces cerevisiae using analysis of 3D reconstructions. We show that many of these results can be reproduced in sparse reconstructions, generated from a small fraction of the experimental data (5% of the data), and study the properties of such models. Finally, we propose for the first time a novel approach for improving the accuracy of 3D reconstructions by introducing additional predicted physical interactions to the model, based on orthologous interactions in an evolutionary-related organism and based on predicted functional interactions between genes. We demonstrate that this approach indeed leads to the reconstruction of improved models. PMID:26000633
Tomographic compressive holographic reconstruction of 3D objects
NASA Astrophysics Data System (ADS)
Nehmetallah, G.; Williams, L.; Banerjee, P. P.
2012-10-01
Compressive holography with multiple projection tomography is applied to solve the inverse ill-posed problem of reconstruction of 3D objects with high axial accuracy. To visualize the 3D shape, we propose Digital Tomographic Compressive Holography (DiTCH), where projections from more than one direction, as in tomographic imaging systems, can be employed, so that a 3D shape with better axial resolution can be reconstructed. We compare DiTCH with single-beam holographic tomography (SHOT), which is based on Fresnel back-propagation. A brief theory of DiTCH is presented, and experimental results of 3D shape reconstruction of objects using DiTCH and SHOT are compared.
3-D flame temperature field reconstruction with multiobjective neural network
NASA Astrophysics Data System (ADS)
Wan, Xiong; Gao, Yiqing; Wang, Yuanmei
2003-02-01
A novel 3-D temperature field reconstruction method is proposed in this paper, based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multiwavelength thermometry is formulated, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP), the reconstruction results of the new method are discussed in detail. The study shows that the new method consistently gives the best reconstruction results. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with the novel method.
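The ART baseline mentioned above is the classical Kaczmarz scheme: cycle through the rows of the projection system Ax = b and project the current estimate onto each row's hyperplane. A minimal sketch (the relaxation parameter and dense-matrix formulation are simplifications for illustration):

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz iteration): for each row
    a_i of A, move the estimate x onto the hyperplane a_i . x = b_i, scaled
    by a relaxation factor. Converges for consistent systems."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return x
```

In real tomography A is a sparse projection operator and b the measured sinogram; the toy dense version above shows only the update rule.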
Light field display and 3D image reconstruction
NASA Astrophysics Data System (ADS)
Iwane, Toru
2016-06-01
Light field optics and its applications have become quite popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data; in this latter procedure, the 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper we first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. We then explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has a finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
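The refocusing operation described above is commonly implemented as shift-and-add: each sub-aperture view is shifted in proportion to its aperture offset, so that features at the chosen disparity align across views. This sketch is a generic illustration of that idea, not the paper's real-domain algorithm; integer shifts and the dictionary layout are assumptions.

```python
import numpy as np

def refocus(subviews, alpha):
    """Shift-and-add refocusing: each sub-aperture view at aperture offset
    (s, t) is shifted back by alpha*(s, t) pixels, then all views are
    averaged. Features at disparity alpha align and appear sharp; features
    at other disparities are spread out (blurred)."""
    acc = np.zeros_like(next(iter(subviews.values())), dtype=float)
    for (s, t), img in subviews.items():
        acc += np.roll(img, (-round(alpha * s), -round(alpha * t)), axis=(0, 1))
    return acc / len(subviews)

# synthetic light field: a point source at disparity 2 appears displaced by
# 2*(s, t) pixels in the view at aperture offset (s, t)
views = {}
for s in (-1, 0, 1):
    for t in (-1, 0, 1):
        img = np.zeros((21, 21))
        img[10 + 2 * s, 10 + 2 * t] = 1.0
        views[(s, t)] = img
```

Refocusing at the correct disparity concentrates all nine contributions onto one pixel; refocusing at the wrong disparity spreads them over nine pixels.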
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
3D Reconstruction For The Detection Of Cranial Anomalies
NASA Astrophysics Data System (ADS)
Kettner, B.; Shalev, S.; Lavelle, C.
1986-01-01
There is a growing interest in the use of three-dimensional (3D) cranial reconstruction from CT scans for surgical planning. A low-cost imaging system has been developed, which provides pseudo-3D images which may be manipulated to reveal the craniofacial skeleton as a whole or any particular component region. The contrast between congenital (hydrocephalic), normocephalic and acquired (carcinoma of the maxillary sinus) anomalous cranial forms demonstrates the potential of this system.
Bound constrained bundle adjustment for reliable 3D reconstruction.
Gong, Yuanzheng; Meng, De; Seibel, Eric J
2015-04-20
Bundle adjustment (BA) is a common estimation algorithm that is widely used in machine vision as the last step in a feature-based three-dimensional (3D) reconstruction algorithm. BA is essentially a non-convex non-linear least-square problem that can simultaneously solve the 3D coordinates of all the feature points describing the scene geometry, as well as the parameters of the camera. The conventional BA takes a parameter either as a fixed value or as an unconstrained variable based on whether the parameter is known or not. In cases where the known parameters are inaccurate but constrained in a range, conventional BA results in an incorrect 3D reconstruction by using these parameters as fixed values. On the other hand, these inaccurate parameters can be treated as unknown variables, but this does not exploit the knowledge of the constraints, and the resulting reconstruction can be erroneous since the BA optimization halts at a dramatically incorrect local minimum due to its non-convexity. In many practical 3D reconstruction applications, unknown variables with range constraints are usually available, such as a measurement with a range of uncertainty or a bounded estimate. Thus to better utilize these pre-known, constrained, but inaccurate parameters, a bound constrained bundle adjustment (BCBA) algorithm is proposed, developed and tested in this study. A scanning fiber endoscope (the camera) is used to capture a sequence of images above a surgery phantom (the object) of known geometry. 3D virtual models are reconstructed based on these images and then compared with the ground truth. The experimental results demonstrate BCBA can achieve a more reliable, rapid, and accurate 3D reconstruction than conventional bundle adjustment. PMID:25969115
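The full BCBA algorithm optimizes camera parameters and 3D points jointly; as a much smaller hedged sketch of the one idea the abstract highlights, box constraints inside a least-squares solver, the code below runs projected gradient descent on a toy residual. The solver name, step size and toy problem are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def bounded_lsq(residual, jac, x0, lb, ub, steps=500, lr=0.05):
    """Minimal projected-gradient solver for min ||r(x)||^2 subject to
    lb <= x <= ub: take a gradient step on the squared residual, then
    project back onto the box."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    for _ in range(steps):
        r = residual(x)
        g = 2.0 * jac(x).T @ r           # gradient of ||r(x)||^2
        x = np.clip(x - lr * g, lb, ub)  # step, then project onto the box
    return x
```

On a toy linear problem whose unconstrained optimum lies outside the box, the solver lands on the boundary of the feasible region instead of at the (infeasible) unconstrained minimum, which is exactly the behaviour the abstract motivates.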
3D scanning modeling method application in ancient city reconstruction
NASA Astrophysics Data System (ADS)
Ren, Pu; Zhou, Mingquan; Du, Guoguang; Shui, Wuyang; Zhou, Pengbo
2015-07-01
With the development of optical engineering technology, the precision of 3D scanning equipment has become higher, and its role in 3D modeling is increasingly distinctive. This paper proposes a 3D scanning modeling method that has been successfully applied to Chinese ancient city reconstruction. On one hand, for existing architecture, an improved algorithm based on multiple scans is adopted. Firstly, two pieces of scanning data are coarsely rigid-registered using spherical displacers and a vertex clustering method. Secondly, a globally weighted ICP (iterative closest points) method is used to achieve fine rigid registration. On the other hand, for buildings which have already disappeared, an exemplar-driven algorithm for rapid modeling is proposed. Based on 3D scanning technology and historical data, a system approach is proposed for 3D modeling and virtual display of the ancient city.
MRI Volume Fusion Based on 3D Shearlet Decompositions.
Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong
2014-01-01
Many MRI scans nowadays can give 3D volume data with different contrasts, and observers may want to view the various contrasts within the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of inter-frame correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than conventional fusion methods based on the 2D wavelet, 2D DT CWT, 3D wavelet and 3D DT CWT. PMID:24817880
3D Reconstruction of Human Motion from Monocular Image Sequences.
Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo
2016-08-01
This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
Automated 3D reconstruction of interiors with multiple scan views
NASA Astrophysics Data System (ADS)
Sequeira, Vitor; Ng, Kia C.; Wolfart, Erik; Goncalves, Joao G. M.; Hogg, David C.
1998-12-01
This paper presents two integrated solutions for realistic 3D model acquisition and reconstruction; an early prototype, in the form of a push trolley, and a later prototype in the form of an autonomous robot. The systems encompass all hardware and software required, from laser and video data acquisition, processing and output of texture-mapped 3D models in VRML format, to batteries for power supply and wireless network communications. The autonomous version is also equipped with a mobile platform and other sensors for the purpose of automatic navigation. The applications for such a system range from real estate and tourism (e.g., showing a 3D computer model of a property to a potential buyer or tenant) to content creation (e.g., creating 3D models of heritage buildings or producing broadcast-quality virtual studios). The system can also be used in industrial environments as a reverse engineering tool to update the design of a plant, or as a 3D photo-archive for insurance purposes. The system is Internet compatible: the photo-realistic models can be accessed via the Internet and manipulated interactively in 3D using a common Web browser with a VRML plug-in. Further information and example reconstructed models are available online via the RESOLV web page at http://www.scs.leeds.ac.uk/resolv/.
3D video sequence reconstruction algorithms implemented on a DSP
NASA Astrophysics Data System (ADS)
Ponomaryov, V. I.; Ramos-Diaz, E.
2011-03-01
A novel approach for 3D image and video reconstruction is proposed and implemented, based on wavelet atomic functions (WAF), which have demonstrated better approximation properties than classical wavelets in various processing problems. Disparity maps are formed using WAF and then employed to produce 3D visualizations as color anaglyphs. Additionally, compression via a Pth-law transform is performed to improve disparity map quality. Other approaches, such as optical flow and a stereo matching algorithm, are also implemented for comparison. Numerous simulation results justify the efficiency of the novel framework. The implementation of the proposed algorithm on the Texas Instruments DSP TMS320DM642 demonstrates a possible real-time processing mode for 3D reconstruction of images and video sequences.
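The WAF-based disparity estimation is the paper's contribution and is not reproduced here; the final anaglyph-encoding step, however, is a standard channel composition and can be sketched directly (red-cyan convention, grayscale views in [0, 1] are assumptions of this example):

```python
import numpy as np

def color_anaglyph(left, right):
    """Compose a red-cyan anaglyph from a rectified stereo pair: the red
    channel carries the left view, the green and blue channels carry the
    right view. Inputs are grayscale images with values in [0, 1]."""
    h, w = left.shape
    out = np.zeros((h, w, 3))
    out[..., 0] = left    # red   <- left eye
    out[..., 1] = right   # green <- right eye
    out[..., 2] = right   # blue  <- right eye
    return out
```

Viewed through red-cyan glasses, each eye then receives only its own view, which is what produces the depth impression from the disparity between the two images.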
Reconstruction and 3D visualisation based on objective real 3D based documentation.
Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A
2012-09-01
Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and examiners' interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427
3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance
Dibildox, Gerardo; Baka, Nora; van Walsum, Theo; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro
2014-09-15
Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
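The core of the registration above is Gaussian-weighted point-set alignment. As a much-simplified hedged sketch (plain soft correspondences plus weighted Kabsch/Procrustes, without the orientation terms, bifurcation weighting, or full GMM machinery of the paper), rigid alignment of two 3D point sets can be written as:

```python
import numpy as np

def rigid_align(src, dst, iters=20, sigma=0.5):
    """Soft-correspondence rigid registration: weight every src-dst pair by a
    Gaussian of its current distance, form each src point's expected
    counterpart, then solve the weighted Procrustes problem (Kabsch) for the
    rotation R and translation t. A toy sketch of GMM-style registration."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        W /= W.sum(axis=1, keepdims=True)  # soft assignment of src to dst
        target = W @ dst                   # expected counterpart of each src point
        mu_s, mu_t = src.mean(0), target.mean(0)
        H = (src - mu_s).T @ (target - mu_t)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ S @ U.T
        t = mu_t - R @ mu_s
    return R, t
```

With well-separated points and a moderate initial misalignment, the soft assignment concentrates on the true correspondences and the recovered (R, t) reproduces the applied transform.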
3-D reconstruction of neurons from multichannel confocal laser scanning image series.
Wouterlood, Floris G
2014-01-01
A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. PMID:24723320
On detailed 3D reconstruction of large indoor environments
NASA Astrophysics Data System (ADS)
Bondarev, Egor
2015-03-01
In this paper we present techniques for highly detailed 3D reconstruction of extra large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm3 on large 100,000 m3 models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for reduction of a point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra large point clouds enabling real-time visualization of huge cloud spaces in conventional web browsers.
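The paper's decimation is planar-based; as a generic stand-in that shows where the 80-95% reductions come from on flat regions, the sketch below does simple voxel-grid decimation, keeping one centroid per occupied cell (the cell size and function name are assumptions of this example, not the paper's method):

```python
import numpy as np

def voxel_decimate(points, cell):
    """Decimate a point cloud by keeping one representative (the centroid)
    per occupied voxel of edge length `cell`. Dense, locally flat regions
    collapse to a sparse grid of points."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n = inv.max() + 1
    sums = np.zeros((n, points.shape[1]))
    counts = np.bincount(inv, minlength=n).astype(float)
    np.add.at(sums, inv, points)  # accumulate per-voxel sums
    return sums / counts[:, None]
```

On a densely sampled planar patch the output size is bounded by the number of occupied cells, independent of the input density, which is what makes reductions above 95% attainable.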
A new algorithm for 3D reconstruction from support functions.
Gardner, Richard J; Kiderlen, Markus
2009-03-01
We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab, and it works for both 2D and 3D reconstructions (in fact, in principle, in any dimension). Reconstructions may be obtained without any pre- or post-processing steps and with no restriction on the sets of measurement directions except their number, a limitation dictated only by computing time. An algorithm due to Prince and Willsky was implemented earlier for 2D reconstructions, and we compare the performance of their algorithm and ours. But our algorithm is the first that works for 3D reconstructions with the freedom stated in the previous paragraph. Moreover, under mild conditions, theory guarantees that outputs of the new algorithm will converge to the input shape as the number of measurements increases. In addition we offer a linear program version of the new algorithm that is much faster and better, or at least comparable, in performance at low levels of noise and reasonably small numbers of measurements. Another modification of the algorithm, suitable for use in a "focus of attention" scheme, is also described. PMID:19147881
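A convex body is determined by its support function h(u), and the crudest reconstruction from measurements h_i in directions u_i is simply the intersection of the halfplanes x . u_i <= h_i. The 2D sketch below implements that naive baseline (not the Gardner-Kiderlen least-squares algorithm; the clipping routine and bounding box are assumptions of this example):

```python
import numpy as np

def clip(poly, u, h):
    """Sutherland-Hodgman clip of polygon `poly` by the halfplane x . u <= h."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        din, dout = p @ u - h, q @ u - h
        if dout <= 0:                 # q inside
            if din > 0:               # edge enters: add the crossing point
                out.append(p + (q - p) * din / (din - dout))
            out.append(q)
        elif din <= 0:                # edge leaves: add the crossing point
            out.append(p + (q - p) * din / (din - dout))
    return out

def body_from_support(dirs, h):
    """Reconstruct a convex body as the intersection of the measured
    halfplanes {x : x . u_i <= h_i}, starting from a large bounding square."""
    poly = [np.array(v, dtype=float) for v in [(-9, -9), (9, -9), (9, 9), (-9, 9)]]
    for u, hi in zip(dirs, h):
        poly = clip(poly, u, hi)
    return np.array(poly)

def area(poly):
    """Shoelace formula for the polygon area."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```

For exact measurements of the unit disk (h = 1 in every direction) the intersection is a polygon circumscribing the disk, so its area approaches pi as the number of directions grows; with noisy data this baseline degrades, which is what motivates the least-squares formulation of the paper.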
3D reconstruction methods of coronal structures by radio observations
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Bastian, T. S.; White, Stephen M.
1992-11-01
The ability to carry out the three dimensional (3D) reconstruction of structures in the solar corona would represent a major advance in the study of the physical properties in active regions and in flares. Methods which allow a geometric reconstruction of quasistationary coronal structures (for example active region loops) or dynamic structures (for example flaring loops) are described: stereoscopy of multi-day imaging observations by the VLA (Very Large Array); tomography of optically thin emission (in radio or soft x-rays); multifrequency band imaging by the VLA; and tracing of magnetic field lines by propagating electron beams.
Reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa
2013-08-01
Reconstruction of three-dimensional (3D) scenes is an active research topic in the field of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken with cameras whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. Firstly, image sequences are captured by a camera moving freely around the object. Secondly, the scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm. An initial matching is then made for the first two images of the sequence. For each subsequent image, processed together with the previous image, the points of interest corresponding to those in the previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, giving the relative position and orientation of the camera. A sequence of depth maps is acquired by using a non-local cost aggregation method for stereo matching. A point cloud sequence is then obtained from the scene depths, and a point cloud model is assembled from the sequence using the external parameters of the camera. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display.
Scattering robust 3D reconstruction via polarized transient imaging.
Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai
2016-09-01
Reconstructing the 3D structure of scenes in a scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with a scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile deviates largely from the true value. To handle this problem, we use the different polarization behaviors of the reflection and scattering components, and introduce active polarization to separate the reflection component and estimate a scattering-robust depth. Our experiments demonstrate that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944
Optical Sensors and Methods for Underwater 3D Reconstruction.
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
Structured Light-Based 3D Reconstruction System for Plants
Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima
2015-01-01
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701
3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.
Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun
2016-08-01
Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been used for decades to determine the surface properties (e.g., composition or geometry) of specimens, achieving increased magnification and contrast and resolution finer than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of their three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples under investigation. In this contribution, we integrate several computational technologies, including machine learning, a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling. PMID:27200484
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on images from low-altitude unmanned aerial vehicle systems (UAVs) is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
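The image-topology idea of restricting feature matching to nearby views can be sketched with hypothetical flight-control positions; the search radius is an assumed tuning parameter, not a value from the paper:

```python
def candidate_pairs(positions, radius):
    """Select image pairs for feature matching using UAV flight-control
    positions: only views within `radius` of each other are compared,
    instead of all O(n^2) combinations."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy <= radius * radius:
                pairs.append((i, j))
    return pairs

# Four exposures along a flight line, 10 m apart: with a 15 m radius
# only adjacent views are matched (3 pairs instead of 6).
pairs = candidate_pairs([(0, 0), (10, 0), (20, 0), (30, 0)], 15)
```

For long strip flights this prunes the matching workload from quadratic to roughly linear in the number of images, which is the speedup the topology map provides.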
New method for 3D reconstruction in digital tomosynthesis
NASA Astrophysics Data System (ADS)
Claus, Bernhard E. H.; Eberhard, Jeffrey W.
2002-05-01
Digital tomosynthesis mammography is an advanced x-ray application that can provide detailed 3D information about the imaged breast. We introduce a novel reconstruction method based on simple backprojection, which yields high contrast reconstructions with reduced artifacts at a relatively low computational complexity. The first step in the proposed reconstruction method is a simple backprojection with an order statistics-based operator (e.g., minimum) used for combining the backprojected images into a reconstructed slice. Accordingly, a given pixel value generally does not contribute to all slices. The percentage of slices where a given pixel value does not contribute, as well as the associated reconstructed values, are collected. Using a form of re-projection consistency constraint, one now updates the projection images and repeats the order statistics backprojection reconstruction step, this time using the enhanced projection images calculated in the first step. In our digital mammography application, this new approach enhances the contrast of structures in the reconstruction, and in particular allows recovery of the loss in signal level due to reduced tissue thickness near the skinline, while keeping artifacts to a minimum. We present results obtained with the algorithm for phantom images.
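A toy version of the order-statistics backprojection (first step only; the re-projection consistency update is omitted) illustrates why a minimum operator suppresses out-of-plane structure. The geometry and values here are invented for illustration: two point features at different depths, with each projection modeled as a depth-proportional lateral shift.

```python
N = 32
shifts = [-2, -1, 0, 1, 2]   # lateral shift per unit depth, per projection
depth_a, depth_b = 1, 3      # depths of the two point features
pos_a, pos_b = 10, 20        # lateral positions of the features

def projection(s):
    # Each projection sees every feature shifted by s * (its depth).
    p = [0.0] * N
    p[(pos_a + s * depth_a) % N] += 1.0
    p[(pos_b + s * depth_b) % N] += 1.0
    return p

projs = [projection(s) for s in shifts]

def backproject(depth, combine):
    # Shift each projection back as if all signal came from `depth`,
    # then combine per pixel: mean = classic shift-and-add, min = the
    # order-statistics operator.
    cols = [[p[(i + s * depth) % N] for i in range(N)]
            for s, p in zip(shifts, projs)]
    return [combine(col) for col in zip(*cols)]

mean_slice = backproject(depth_a, lambda col: sum(col) / len(col))
min_slice = backproject(depth_a, min)
# The in-plane feature survives both operators; the out-of-plane feature
# leaves ghost artifacts in the mean but is removed by the minimum.
```

In the slice at depth_a, the mean reconstruction retains smeared copies of the deeper feature, while the minimum keeps only the feature that aligns in every projection.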
3D reconstruction on CBCT in the cystic pathology of the jaws
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
The paper presents the image acquisition of Cone Beam Computed Tomography scans of human facial bones and their processing in order to obtain a 3D reconstruction model of the skull. The reconstructed model provides useful data to the physician in cases of maxillary cystic pathology, but more important are the data about the relationship of the maxillary cyst to the surrounding anatomical elements. Using B-splines, a 3D volume model of the human facial bones can be achieved. This model can be exported to any CAD system, resulting in a virtual model which can be used in FEM analysis.
Ribes, Delphine; Parafita, Julia; Charrier, Rémi; Magara, Fulvio; Magistretti, Pierre J; Thiran, Jean-Philippe
2010-01-01
In this article we introduce JULIDE, a software toolkit developed to perform the 3D reconstruction, intensity normalization, volume standardization by 3D image registration and voxel-wise statistical analysis of autoradiographs of mouse brain sections. This software tool has been developed in the open-source ITK software framework and is freely available under a GPL license. The article presents the complete image processing chain from raw data acquisition to 3D statistical group analysis. Results of the group comparison in the context of a study on spatial learning are shown as an illustration of the data that can be obtained with this tool. PMID:21124830
Using of Bezier Interpolation in 3D Reconstruction of Human Femur Bone
NASA Astrophysics Data System (ADS)
Toth-Tascau, Mirela; Pater, Flavius; Stoia, Dan Ioan; Menyhardt, Karoly; Rosu, Serban; Rusu, Lucian; Vigaru, Cosmina
2011-09-01
The paper is focused on image acquisition and processing of CT scans of a human femur bone in order to obtain 3D reconstructions of the human femur. The objective of the presented study was to obtain a realistic 3D model of the human femur bone. The reconstructed model provides useful data to the physician, but more important are the data and 3D models that can be used for virtual testing of femoral implants and endoprostheses. Using the B-spline patch, a 3D volume model of the human femur bone can be achieved. This model can be easily imported into any CAD system, resulting in a virtual femur model which can be used in FEM analysis.
Dose fractionation theorem in 3-D reconstruction (tomography)
Glaeser, R.M.
1997-02-01
It is commonly assumed that the large number of projections required for single-axis tomography precludes its application to most beam-labile specimens. However, Hegerl and Hoppe have pointed out that the total dose required to achieve statistical significance for each voxel of a computed 3-D reconstruction is the same as that required to obtain a single 2-D image of that isolated voxel, at the same level of statistical significance. Thus a statistically significant 3-D image can be computed from statistically insignificant projections, as long as the total dose that is distributed among these projections is high enough that it would have resulted in a statistically significant projection, if applied to only one image. We have tested this critical theorem by simulating the tomographic reconstruction of a realistic 3-D model created from an electron micrograph. The simulations verify the basic conclusions of the theorem, even in the presence of high absorption, signal-dependent noise, varying specimen contrast and a missing angular range. Furthermore, the simulations demonstrate that individual projections in the series of fractionated-dose images can be aligned by cross-correlation, because they contain significant information derived from the summation of features from different depths in the structure. This latter information is generally not useful for structural interpretation prior to 3-D reconstruction, owing to the complexity of most specimens investigated by single-axis tomography. These results, in combination with dose estimates for imaging single voxels and measurements of radiation damage in the electron microscope, demonstrate that it is feasible to use single-axis tomography with soft X-ray microscopy of frozen-hydrated specimens.
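The theorem's core claim, that K fractionated exposures of dose D/K carry the same statistical information as a single exposure of dose D, can be checked with a toy Poisson counting simulation (the dose, fraction count and trial numbers below are arbitrary):

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's method; adequate for the modest count rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

dose, K, trials = 100.0, 25, 2000

# One full-dose exposure vs. the sum of K fractionated exposures.
single = [poisson(dose) for _ in range(trials)]
fractionated = [sum(poisson(dose / K) for _ in range(K))
                for _ in range(trials)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Same mean and same noise: the sum of K Poisson(dose/K) frames is
# statistically identical to one Poisson(dose) exposure.
ratio = var(fractionated) / var(single)
```

The variance ratio comes out close to 1, consistent with the fact that a sum of K independent Poisson(dose/K) variables is exactly Poisson(dose).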
3D reconstruction based on CT image and its application
NASA Astrophysics Data System (ADS)
Zhang, Jianxun; Zhang, Mingmin
2004-03-01
Reconstructing a 3-D model of the liver and its internal vessel system and simulating liver surgery can increase the accuracy and safety of the surgical operation, with the aims of minimizing surgical trauma, shortening operation time, increasing the success rate, reducing medical costs and promoting patient recovery. This text describes the technology and methods by which the authors construct the 3-D model of the liver and its internal vessel system from CT images and simulate the surgical operation. A direct volume rendering method establishes the 3D model of the liver. Under an OpenGL environment, a point-based rendering method is adopted to display the liver's internal vessel system and to simulate the surgical operation. Finally, the wavelet transform method is adopted to compress the medical image data.
3D temperature field reconstruction using ultrasound sensing system
NASA Astrophysics Data System (ADS)
Liu, Yuqian; Ma, Tong; Cao, Chengyu; Wang, Xingwei
2016-04-01
3D temperature field reconstruction is of practical interest to the power, transportation and aviation industries, and it also opens up opportunities for real-time control or optimization of high-temperature fluid or combustion processes. In our paper, a new distributed optical fiber sensing system consisting of a series of elements is used to generate and receive acoustic signals. This system is the first active temperature field sensing system that combines the advantages of optical fiber sensors (distributed sensing capability) and acoustic sensors (non-contact measurement). Signals along multiple paths are measured simultaneously, enabled by a code division multiple access (CDMA) technique. A proposed Gaussian radial basis function (GRBF)-based approach then approximates the temperature field as a finite summation of space-dependent basis functions with time-dependent coefficients. The travel time of the acoustic signals depends on the temperature of the medium. On this basis, the Gaussian functions are integrated along a number of paths determined by the number and distribution of sensors. The inversion problem of estimating the unknown parameters of the Gaussian functions can be solved from the measured times-of-flight (ToF) of the acoustic waves and the lengths of the propagation paths using the recursive least squares (RLS) method. The simulation results show an approximation error of less than 2% in 2D and 5% in 3D, respectively. This demonstrates the feasibility and efficiency of our proposed 3D temperature field reconstruction mechanism.
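The path-integral inversion at the heart of this approach can be sketched in simplified form. The toy below (geometry, basis centers and noise-free "time of flight" measurements are all invented for illustration) approximates a 2D field as a sum of four Gaussians, integrates each basis function numerically along straight sensing paths, and recovers the coefficients by batch least squares; the paper uses recursive least squares, but the underlying linear model is the same.

```python
import math

centers = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
sigma = 0.2

def phi(k, x, y):
    cx, cy = centers[k]
    return math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def path_row(p0, p1, samples=200):
    # Numerically integrate each basis function along the path p0 -> p1.
    (x0, y0), (x1, y1) = p0, p1
    ds = math.hypot(x1 - x0, y1 - y0) / samples
    row = [0.0] * len(centers)
    for i in range(samples):
        t = (i + 0.5) / samples
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        for k in range(len(centers)):
            row[k] += phi(k, x, y) * ds
    return row

# Horizontal, vertical and diagonal sensing paths across a unit square.
paths = ([((0.0, y), (1.0, y)) for y in (0.1, 0.3, 0.5, 0.7, 0.9)] +
         [((x, 0.0), (x, 1.0)) for x in (0.1, 0.3, 0.5, 0.7, 0.9)] +
         [((0.0, 0.0), (1.0, 1.0)), ((0.0, 1.0), (1.0, 0.0))])
A = [path_row(p0, p1) for p0, p1 in paths]

true_a = [1.0, 2.0, 3.0, 4.0]                 # coefficients to recover
b = [sum(row[k] * true_a[k] for k in range(4)) for row in A]

# Solve the normal equations (A^T A) a = A^T b by Gaussian elimination.
m = 4
ATA = [[sum(A[p][i] * A[p][j] for p in range(len(A))) for j in range(m)]
       for i in range(m)]
ATb = [sum(A[p][i] * b[p] for p in range(len(A))) for i in range(m)]
M = [ATA[i] + [ATb[i]] for i in range(m)]
for col in range(m):
    piv = max(range(col, m), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(col + 1, m):
        f = M[r][col] / M[col][col]
        for j in range(col, m + 1):
            M[r][j] -= f * M[col][j]
a = [0.0] * m
for i in range(m - 1, -1, -1):
    a[i] = (M[i][m] - sum(M[i][j] * a[j] for j in range(i + 1, m))) / M[i][i]
```

The diagonal paths are needed to break the row-sum/column-sum degeneracy of the four-center layout; with them the system has full rank and the coefficients are recovered essentially exactly from noise-free data.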
One-step reconstruction of assembled 3D holographic scenes
NASA Astrophysics Data System (ADS)
Velez Zea, Alejandro; Barrera-Ramírez, John Fredy; Torroba, Roberto
2015-12-01
We present a new experimental approach for reconstructing in one step 3D scenes that could not otherwise be captured in a single snapshot with a standard off-axis digital hologram architecture, due to a lack of illumination resources or a limited setup size. Consequently, whenever a scene cannot be wholly illuminated or its size surpasses the available setup disposition, this protocol can be implemented to solve these issues. We need neither to alter the original setup at every step nor to cover the whole scene with the illuminating source, thus saving resources. With this technique we multiplex the processed holograms of actual diffuse objects composing a scene using a two-beam off-axis holographic setup in a Fresnel approach. By registering the holograms of several objects individually and applying a spatial filtering technique, the filtered Fresnel holograms can then be added to produce a compound hologram. The simultaneous reconstruction of all objects is performed in one step using the same recovery procedure employed for single holograms. Using this technique, we were able to reconstruct, for the first time to our knowledge, a scene by multiplexing off-axis holograms of the 3D objects without cross talk. This technique is important for quantitative visualization of optically packaged multiple images and is useful for a wide range of applications. We present experimental results to support the method.
Real-Time Camera Guidance for 3d Scene Reconstruction
NASA Astrophysics Data System (ADS)
Schindler, F.; Förstner, W.
2012-07-01
We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.
3D segmentation and reconstruction of endobronchial ultrasound
NASA Astrophysics Data System (ADS)
Zang, Xiaonan; Breslav, Mikhail; Higgins, William E.
2013-03-01
State-of-the-art practice for lung-cancer staging bronchoscopy often draws upon a combination of endobronchial ultrasound (EBUS) and multidetector computed-tomography (MDCT) imaging. While EBUS offers real-time in vivo imaging of suspicious lesions and lymph nodes, its low signal-to-noise ratio and tendency to exhibit missing region-of-interest (ROI) boundaries complicate diagnostic tasks. Furthermore, past efforts did not incorporate automated analysis of EBUS images and a subsequent fusion of the EBUS and MDCT data. To address these issues, we propose near real-time automated methods for three-dimensional (3D) EBUS segmentation and reconstruction that generate a 3D ROI model along with ROI measurements. Results derived from phantom data and lung-cancer patients show the promise of the methods. In addition, we present a preliminary image-guided intervention (IGI) system example, whereby EBUS imagery is registered to a patient's MDCT chest scan.
Fast and efficient particle reconstruction on a 3D grid using sparsity
NASA Astrophysics Data System (ADS)
Cornic, P.; Champagnat, F.; Cheminet, A.; Leclaire, B.; Le Besnerais, G.
2015-03-01
We propose an approach for efficient localization and intensity reconstruction of particles on a 3D grid based on sparsity principles. The computational complexity of the method is limited by using the particle volume reconstruction paradigm (Champagnat et al. in Meas Sci Technol 25, 2014) and a reduction in the problem dimension. Tests on synthetic and experimental data show that the proposed method leads to more efficient detections and to reconstructions of higher quality than classical tomoPIV approaches on a large range of seeding densities, up to ppp ≈ 0.12.
3D reconstruction of tomographic images applied to largely spaced slices.
Traina, A J; Prado, A H; Bueno, J M
1997-12-01
This paper presents a full reconstruction process for magnetic resonance images. The first step is to bring the acquired data from the frequency domain to the image domain using a Fast Fourier Transform algorithm. Tomographic image interpolation is then used to transform a sequence of tomographic slices into an isotropic volume data set, a process also called 3D reconstruction. This work describes an automatic method whose interpolation stage is based on a preceding matching stage using Delaunay triangulation. The reconstruction approach uses an extrapolation procedure that permits appropriate treatment of the boundaries of the object under analysis. PMID:9555624
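As a simplified illustration of the interpolation stage, a plain intensity-based linear interpolation between slices is sketched below. The paper's method additionally matches corresponding structures via Delaunay triangulation before interpolating, which this sketch omits; the slice values and spacing are invented.

```python
def interpolate_slices(slices, spacing):
    """Linearly interpolate between 2D slices acquired `spacing` voxels
    apart along z, producing an approximately isotropic volume."""
    vol = []
    for a, b in zip(slices, slices[1:]):
        for step in range(spacing):
            t = step / spacing
            vol.append([[(1 - t) * a[r][c] + t * b[r][c]
                         for c in range(len(a[0]))]
                        for r in range(len(a))])
    vol.append([row[:] for row in slices[-1]])
    return vol

# Two 2x2 slices spaced 4 voxels apart -> 5 isotropic slices.
vol = interpolate_slices([[[0, 0], [0, 0]], [[4, 4], [4, 4]]], 4)
```

Pure intensity interpolation like this blurs structures that move laterally between largely spaced slices, which is exactly the problem the shape-matching stage is designed to avoid.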
Facial-paralysis diagnostic system based on 3D reconstruction
NASA Astrophysics Data System (ADS)
Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee
2015-05-01
The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.
3D-reconstruction of blood vessels by ultramicroscopy
Jährling, Nina; Becker, Klaus
2009-01-01
As recently shown, ultramicroscopy (UM) allows 3D-visualization of even large microscopic structures with µm resolution. Thus, it can be applied to anatomical studies of numerous biological and medical specimens. We reconstructed the three-dimensional architecture of tomato-lectin (Lycopersicon esculentum) stained vascular networks by UM in whole mouse organs. The topology of filigree branches of the microvasculature was visualized. Since tumors require an extensive growth of blood vessels to survive, this novel approach may open up new vistas in neurobiology and histology, particularly in cancer research. PMID:20539742
Online reconstruction of 3D magnetic particle imaging data
NASA Astrophysics Data System (ADS)
Knopp, T.; Hofmann, M.
2016-06-01
Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668
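The adaptive block-averaging step can be sketched as follows; the frame data and timing numbers are invented for illustration, while the real framework operates on raw MPI data within its roughly 2 s latency budget.

```python
import math

def choose_block(recon_time, frame_period):
    # Average enough frames per reconstruction that processing keeps up
    # with acquisition, trading temporal resolution for signal quality.
    return max(1, math.ceil(recon_time / frame_period))

def block_average(frames, block):
    # Average raw frames in groups of `block` before reconstruction.
    out = []
    for i in range(0, len(frames) - block + 1, block):
        chunk = frames[i:i + block]
        out.append([sum(vals) / block for vals in zip(*chunk)])
    return out

# 40 volumes/s acquisition (25 ms per frame) with 50 ms reconstructions
# -> average frames in pairs.
block = choose_block(0.05, 0.025)
averaged = block_average([[1, 1], [3, 3], [5, 5], [7, 7]], block)
```

Choosing the block size from the measured reconstruction time is what makes the scheme adaptive: faster reconstructions shrink the block toward single-frame latency, slower ones grow it to preserve throughput.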
Computerized 3-D reconstruction of two "double teeth".
Lyroudia, K; Mikrogeorgis, G; Nikopoulos, N; Samakovitis, G; Molyvdas, I; Pitas, I
1997-10-01
"Double teeth" is a root malformation in the dentition and the purpose of this study was to reconstruct three-dimensionally the external and internal morphology of two "double teeth". The first set of "double teeth" was formed by the conjunction of a mandibular molar and a premolar, and the second by a conjunction of a maxillary molar and a supernumerary tooth. The process of 3-D reconstruction included serial cross-sectioning, photographs of the sections, digitization of the photographs, extraction of the boundaries of interest for each section, surface representation using triangulation and, finally, surface rendering using photorealistic effects. The resulting three-dimensional representations of the two teeth helped us visualize their external and internal anatomy. The results showed: a) in the first case, fusion of the radical and coronal dentin, as well as fusion of the pulp chambers; and b) in the second case, fusion only of the radical dentin and the pulp chambers. PMID:9550051
Digital Reconstruction of 3D Polydisperse Dry Foam
NASA Astrophysics Data System (ADS)
Chieco, A.; Feitosa, K.; Roth, A. E.; Korda, P. T.; Durian, D. J.
2012-02-01
Dry foam is a disordered packing of bubbles that distort into familiar polyhedral shapes. We have implemented a method that uses optical axial tomography to reconstruct the internal structure of a dry foam in three dimensions. The technique consists of taking a series of photographs of the dry foam against a uniformly illuminated background at successive angles. By summing the projections we create images of the foam cross section. Image analysis of the cross sections allows us to locate Plateau borders and vertices. The vertices are then connected according to Plateau's rules to reconstruct the internal structure of the foam. Using this technique we are able to visualize a large number of bubbles of real 3D foams and obtain statistics of faces and edges.
Fast vision-based catheter 3D reconstruction.
Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D
2016-07-21
Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots, based on the views of two arbitrarily positioned cameras, is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms. PMID:27352011
Digital 3D facial reconstruction of George Washington
NASA Astrophysics Data System (ADS)
Razdan, Anshuman; Schwartz, Jeff; Tocheri, Mathew; Hansford, Dianne
2006-02-01
PRISM is a focal point of interdisciplinary research in geometric modeling, computer graphics and visualization at Arizona State University. Many projects in the last ten years have involved laser scanning, geometric modeling and feature extraction from data such as archaeological vessels, bones, human faces, etc. This paper gives a brief overview of a recently completed project on the 3D reconstruction of George Washington (GW). The project brought together forensic anthropologists, digital artists and computer scientists in the 3D digital reconstruction of GW at ages 57, 45 and 19, including detailed heads and bodies. Although many other scanning projects, such as the Michelangelo project, have successfully captured fine details via laser scanning, our project took it a step further: predicting what the individual (in the sculpture) might have looked like in both later and earlier years, specifically developing a process to account for reverse aging. Our base data were GW's face mask at the Morgan Library and Houdon's bust of GW at Mount Vernon, both made when GW was 53. Additionally, we scanned the statue at the Capitol in Richmond, VA, various dentures, and other items. Other measurements came from clothing and even portraits of GW. The digital GW models were then milled in high density foam for a studio to complete the work. These will be unveiled at the opening of the new education center at Mt Vernon in fall 2006.
3D Reconstruction of virtual colon structures from colonoscopy images.
Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C
2014-01-01
This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230
3D Reconstruction of Irregular Buildings and Buddha Statues
NASA Astrophysics Data System (ADS)
Zhang, K.; Li, M.-j.
2014-04-01
Three-dimensional laser scanning can acquire an object's surface data quickly and accurately. However, the post-processing of point clouds is not perfect and can be improved. Based on a study of 3D laser scanning technology, this paper describes solutions for modelling the irregular ancient buildings and Buddha statues in Jinshan Temple, covering data acquisition, modelling, texture mapping, etc. To model the irregular ancient buildings effectively, the structure of each building is extracted manually from the point cloud and the textures are mapped using 3ds Max. The method combines 3D laser scanning technology with traditional modelling methods, and greatly improves the efficiency and accuracy of restoring the ancient buildings. The statues, on the other hand, are modelled as objects in reverse engineering. The resulting digital models of the statues are not just vivid but also accurate in the surveying and mapping sense. On this basis, a 3D scene of Jinshan Temple is reconstructed, which demonstrates the validity of the solutions.
Fast fully 3-D image reconstruction in PET using planograms.
Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W
2004-04-01
We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
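The Fourier relation underlying the planogram method can be illustrated in two dimensions, where it reduces to the classical central-section (Fourier slice) theorem: the 1D Fourier transform of a parallel projection equals a central line through the 2D Fourier transform of the image. The sketch below (our own minimal numpy demonstration, not the authors' 4D planogram implementation; the function name is hypothetical) verifies that relation numerically:

```python
import numpy as np

def projection_via_fourier(img):
    """Central-section (Fourier slice) theorem in 2D: the 1D Fourier
    transform of a parallel projection equals a central line through
    the 2D Fourier transform of the image.  The planogram approach
    extends this relation to 2D sections of 4D detector data."""
    direct = np.fft.fft(img.sum(axis=0))   # FT of the projection (column sums)
    section = np.fft.fft2(img)[0, :]       # central row of the 2D FT
    return direct, section
```

Because `np.fft.fft2` with frequency index 0 along one axis sums out that axis, the two arrays agree to machine precision for any image.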
Live ultrasound volume reconstruction using scout scanning
NASA Astrophysics Data System (ADS)
Meyer, Amelie; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor
2015-03-01
Ultrasound-guided interventions often necessitate scanning of deep-seated anatomical structures that may be hard to visualize. Visualization can be improved using reconstructed 3D ultrasound volumes. High-resolution 3D reconstruction of a large area during clinical interventions is challenging if the region of interest is unknown. We propose a two-stage scanning method allowing the user to perform quick low-resolution scouting followed by high-resolution live volume reconstruction. Scout scanning is accomplished by stacking 2D tracked ultrasound images into a low-resolution volume. Then, within a region of interest defined in the scout scan, live volume reconstruction can be performed by continuous scanning until sufficient image density is achieved. We implemented the workflow as a module of the open-source 3D Slicer application, within the SlicerIGT extension and building on the PLUS toolkit. Scout scanning is performed in a few seconds using 3 mm spacing to allow region of interest definition. Live reconstruction parameters are set to provide good image quality (0.5 mm spacing, hole filling enabled) and feedback is given during live scanning by regularly updated display of the reconstructed volume. Use of scout scanning may allow the physician to identify anatomical structures. Subsequent live volume reconstruction in a region of interest may assist in procedures such as targeting needle interventions or estimating brain shift during surgery.
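The scout-scanning step described above amounts to pasting tracked 2D frames into a regular voxel grid. As a rough illustration only (not the PLUS toolkit implementation, and with no hole filling; all names are ours), a minimal nearest-voxel insertion might look like:

```python
import numpy as np

def reconstruct_volume(frames, transforms, vol_shape, spacing):
    """Stack tracked 2D ultrasound frames into a regular 3D volume.

    frames     : list of 2D arrays (pixel intensities)
    transforms : list of 4x4 image-to-world matrices (tracking data)
    vol_shape  : (nx, ny, nz) voxel grid size
    spacing    : voxel edge length, same units as the transforms
    """
    acc = np.zeros(vol_shape)            # intensity accumulator
    cnt = np.zeros(vol_shape)            # hit counter per voxel
    for img, T in zip(frames, transforms):
        h, w = img.shape
        # homogeneous pixel coordinates (u, v, 0, 1) of the frame plane
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u.ravel(), v.ravel(),
                        np.zeros(u.size), np.ones(u.size)])
        world = T @ pix                  # map pixels into world space
        idx = np.round(world[:3] / spacing).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)[:, None]), axis=0)
        np.add.at(acc, tuple(idx[:, ok]), img.ravel()[ok])
        np.add.at(cnt, tuple(idx[:, ok]), 1)
    # average where hit; leave unvisited voxels at zero (holes)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```

Coarse scouting versus fine live reconstruction then differs only in the `spacing` argument (e.g. 3 mm versus 0.5 mm) and in whether holes are subsequently filled.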
Hansis, Eberhard; Schäfer, Dirk; Dössel, Olaf; Grass, Michael
2008-11-01
A 3-D reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular disease, compared to 2-D X-ray angiograms. Besides improved roadmapping, quantitative vessel analysis is possible. Due to the heart's motion, rotational coronary angiography typically provides only 5-10 projections for the reconstruction of each cardiac phase, which leads to a strongly undersampled reconstruction problem. Such an ill-posed problem can be approached with regularized iterative methods. The coronary arteries cover only a small fraction of the reconstruction volume. Therefore, the minimization of the L1 norm of the reconstructed image, favoring spatially sparse images, is a suitable regularization. Additional problems are overlaid background structures and projection truncation, which can be alleviated by background reduction using a morphological top-hat filter. This paper quantitatively evaluates image reconstruction based on these ideas on software phantom data, in terms of reconstructed absorption coefficients and vessel radii. Results for different algorithms and different input data sets are compared. First results for electrocardiogram-gated reconstruction from clinical catheter-based rotational X-ray coronary angiography are presented. Excellent 3-D image quality can be achieved. PMID:18955171
3D reconstruction of complex geological bodies: Examples from the Alps
NASA Astrophysics Data System (ADS)
Zanchi, Andrea; Francesca, Salvi; Stefano, Zanchetta; Simone, Sterlacchini; Graziano, Guerra
2009-01-01
Cartographic geological and structural data collected in the field and managed by Geographic Information Systems (GIS) technology can be used for 3D reconstruction of complex geological bodies. Using a link between GIS tools and gOcad, stratigraphic and tectonic surfaces can be reconstructed taking into account any geometrical constraint derived from field observations. Complex surfaces can be reconstructed using large data sets analysed by suitable geometrical techniques. Three main typologies of geometric features and related attributes are exported from a GIS-geodatabase: (1) topographic data as points from a digital elevation model; (2) stratigraphic and tectonic boundaries, and linear features as 2D polylines; (3) structural data as points. After having imported the available information into gOcad, the following steps should be performed: (1) construction of the topographic surface by interpolation of points; (2) 3D mapping of the linear geological boundaries and linear features by vertical projection on the reconstructed topographic surface; (3) definition of geometrical constraints from planar and linear outcrop data; (4) construction of a network of cross-sections based on field observations and geometrical constraints; (5) creation of 3D surfaces, closed volumes and grids from the constructed objects. Three examples of the reconstruction of complex geological bodies from the Italian Alps are presented here. The methodology demonstrates that although only outcrop data were available, 3D modelling allows the checking of the geometrical consistency of the interpretative 2D sections and of the field geology, through a 3D visualisation of geometrical models. Application of a 3D geometrical model to the case studies can be very useful in geomechanical modelling for slope-stability or resource evaluation.
Colored 3D surface reconstruction using Kinect sensor
NASA Astrophysics Data System (ADS)
Guo, Lian-peng; Chen, Xiang-ning; Chen, Ying; Liu, Bin
2015-03-01
A colored 3D surface reconstruction method that effectively fuses the information of both depth and color images from a Microsoft Kinect is proposed and demonstrated by experiment. Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which efficiently combines the depth and color data to improve depth quality. The registered depth data are integrated into a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, improved ray casting for rendering the fully colored surface is implemented to estimate the color texture of the reconstructed object. Capturing the depth and color images of a toy car, the improved joint-bilateral filter based on region segmentation improves the quality of the depth images by approximately 4.57 dB in peak signal-to-noise ratio (PSNR), compared with 1.16 dB for the standard joint-bilateral filter. The colored reconstruction results for the toy car demonstrate the suitability and ability of the proposed method.
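For reference, the standard joint-bilateral filter that this work improves upon weights each depth neighbor by a spatial Gaussian and by a range Gaussian computed on the registered color image, so that depth edges follow color edges. A minimal (unoptimized, grayscale-guided) numpy sketch, not the authors' region-segmentation variant:

```python
import numpy as np

def joint_bilateral_filter(depth, color, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth a depth map using range weights from the registered color
    (intensity) image, so depth discontinuities follow color edges."""
    h, w = depth.shape
    out = np.zeros((h, w))
    pad = radius
    d = np.pad(depth.astype(float), pad, mode="edge")
    c = np.pad(color.astype(float), pad, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    for i in range(h):
        for j in range(w):
            dwin = d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            cwin = c[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight computed on the *color* image (the "joint" part)
            rng = np.exp(-(cwin - c[i + pad, j + pad])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * dwin) / np.sum(wgt)
    return out
```

Because the weights are normalized, a constant depth map passes through unchanged regardless of the guidance image.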
Real-Time Large Scale 3D Reconstruction by Fusing Kinect and IMU Data
NASA Astrophysics Data System (ADS)
Huai, J.; Zhang, Y.; Yilmaz, A.
2015-08-01
Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes, such as robot navigation and augmented reality. However, generating dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) the coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce the long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
Using flow information to support 3D vessel reconstruction from rotational angiography
Waechter, Irina; Bredno, Joerg; Weese, Juergen; Barratt, Dean C.; Hawkes, David J.
2008-07-15
For the assessment of cerebrovascular diseases, it is beneficial to obtain three-dimensional (3D) morphologic and hemodynamic information about the vessel system. Rotational angiography is routinely used to image the 3D vascular geometry and we have shown previously that rotational subtraction angiography has the potential to also give quantitative information about blood flow. Flow information can be determined when the angiographic sequence shows inflow and possibly outflow of contrast agent. However, a standard volume reconstruction assumes that the vessel tree is uniformly filled with contrast agent during the whole acquisition. If this is not the case, the reconstruction exhibits artifacts. Here, we show how flow information can be used to support the reconstruction of the 3D vessel centerline and radii in this case. Our method uses the fast marching algorithm to determine the order in which voxels are analyzed. For every voxel, the rotational time intensity curve (R-TIC) is determined from the image intensities at the projection points of the current voxel. Next, the bolus arrival time of the contrast agent at the voxel is estimated from the R-TIC. Then, a measure of the intensity and duration of the enhancement is determined, from which a speed value is calculated that steers the propagation of the fast marching algorithm. The results of the fast marching algorithm are used to determine the 3D centerline by backtracking. The 3D radius is reconstructed from 2D radius estimates on the projection images. The proposed method was tested on computer simulated rotational angiography sequences with systematically varied x-ray acquisition, blood flow, and contrast agent injection parameters and on datasets from an experimental setup using an anthropomorphic cerebrovascular phantom. For the computer simulation, the mean absolute error of the 3D centerline and 3D radius estimation was 0.42 and 0.25 mm, respectively. For the experimental datasets, the mean absolute
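The fast marching step above propagates a front through the volume in order of arrival time, with a voxel-wise speed derived from the contrast enhancement; the centerline is then recovered by backtracking. A Dijkstra-style front propagation captures the same ordering idea and is simple to sketch (our own approximation of fast marching on a 6-connected grid, not the authors' implementation; all names are hypothetical):

```python
import heapq
import numpy as np

def propagate_front(speed, seed):
    """Dijkstra-style front propagation: voxels are finalized in order of
    arrival time T, where stepping into a voxel costs 1/speed.  This
    approximates the fast-marching ordering used to grow the vessel;
    following `parent` from a target back to the seed yields a path
    (the centerline, when speed is high inside the vessel)."""
    T = np.full(speed.shape, np.inf)
    parent = {}
    T[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, p = heapq.heappop(heap)
        if t > T[p]:
            continue                      # stale heap entry
        for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            q = (p[0] + dz, p[1] + dy, p[2] + dx)
            if all(0 <= q[k] < speed.shape[k] for k in range(3)) and speed[q] > 0:
                nt = t + 1.0 / speed[q]
                if nt < T[q]:
                    T[q] = nt
                    parent[q] = p
                    heapq.heappush(heap, (nt, q))
    return T, parent
```

In the paper's setting the speed value per voxel would come from the intensity and duration of the rotational time intensity curve; here it is simply an input array.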
3D reconstruction of rotational video microscope based on patches
NASA Astrophysics Data System (ADS)
Ma, Shijie; Qu, Yufu
2015-11-01
Because of its small field of view and shallow depth of field, a microscope can only capture 2D images of an object. In order to observe the three-dimensional structure of micro objects, a microscopy image reconstruction algorithm based on an improved patch-based multi-view stereo (PMVS) algorithm is proposed. The new algorithm improves PMVS in two respects: first, it increases the number of propagation directions; second, during expansion, different expansion radii and iteration counts are set according to the angle between the normal vector of the seed patch and the direction vector of the line passing through the seed patch center and the camera center. Compared with PMVS, the new algorithm produces three times as many 3D points, and the holes on the vertical sides are also eliminated.
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Aleksandr V.
2009-01-01
Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
The new CORIMP CME catalog & 3D reconstructions
NASA Astrophysics Data System (ADS)
Byrne, Jason; Morgan, Huw; Gallagher, Peter; Habbal, Shadia; Davies, Jackie
2015-04-01
A new coronal mass ejection catalog has been built from a unique set of coronal image processing techniques, called CORIMP, that overcomes many of the limitations of current catalogs in operation. An online database has been produced for the SOHO/LASCO data and the event detections therein, providing information on CME onset time, position angle, angular width, speed, acceleration, and mass, along with kinematic plots and observation movies. The fidelity and robustness of these methods and of the derived CME structure and kinematics will lead to an improved understanding of the dynamics of CMEs, and a real-time version of the algorithm has been implemented to provide CME detection alerts to the interested space weather community. Furthermore, STEREO data provide the ability to perform 3D reconstructions of CMEs observed in multipoint observations. This allows a determination of the 3D kinematics and morphologies of CMEs characterised in STEREO data via the 'elliptical tie-pointing' technique. The associated observations of SOHO, SDO and PROBA2 (and the intended use of K-Cor) provide additional measurements and constraints on the CME analyses in order to improve their accuracy.
3D imaging reconstruction and impacted third molars: case reports
Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea
2012-01-01
There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even when positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third molar surgery, making a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim to better define the relationship between third molars and the mandibular canal using Dental CT scans, DICOM image acquisition and 3D reconstruction with dedicated software. From our study we conclude that 3D images are not indispensable, but they can provide very valuable assistance in the most complicated cases. PMID:23386934
3D volume visualization in remote radiation treatment planning
NASA Astrophysics Data System (ADS)
Yun, David Y.; Garcia, Hong-Mei C.; Mun, Seong K.; Rogers, James E.; Tohme, Walid G.; Carlson, Wayne E.; May, Stephen; Yagel, Roni
1996-03-01
This paper reports a novel application of 3D visualization in an ARPA-funded remote radiation treatment planning (RTP) experiment, utilizing supercomputer 3D volumetric modeling power and NASA ACTS (Advanced Communication Technology Satellite) communication bandwidths in the Ka-band range. The objective of radiation treatment is to deliver a tumoricidal dose of radiation to a tumor volume while minimizing doses to surrounding normal tissues. High performance graphics computers are required to allow physicians to view a 3D anatomy, specify proposed radiation beams, and evaluate the dose distribution around the tumor. Supercomputing power is needed to compute and even optimize the dose distribution according to pre-specified requirements. High speed communications offer possibilities for sharing scarce and expensive computing resources (e.g., hardware, software, personnel, etc.) as well as medical expertise for 3D treatment planning among hospitals. This paper provides initial technical insights into the feasibility of such resource sharing. The overall deployment of the RTP experiment, visualization procedures, and parallel volume rendering in support of remote interactive 3D volume visualization will be described.
Gene Electrotransfer in 3D Reconstructed Human Dermal Tissue.
Madi, Moinecha; Rols, Marie-Pierre; Gibot, Laure
2016-01-01
Gene electrotransfer into the skin is of particular interest for the development of medical applications including DNA vaccination, cancer treatment, wound healing and treatment of local skin disorders. However, such clinical applications are currently limited by a poor understanding of the mechanisms governing DNA electrotransfer within human tissue. Nowadays, most studies are carried out in rodent models, but rodent skin differs from human skin in terms of cell composition and architecture. We used a tissue-engineering approach to study gene electrotransfer mechanisms in a human tissue context. Primary human dermal fibroblasts were cultured according to the self-assembly method to produce 3D reconstructed human dermal tissue. In this study, we showed that cells of the reconstructed cutaneous tissue were efficiently electropermeabilized by applying millisecond electric pulses, without affecting their viability. A reporter gene was successfully electrotransferred into this human tissue and gene expression was detected for up to 48 h. Interestingly, the transfected cells were located solely on the upper surface of the tissue, where they were in close contact with the plasmid DNA solution. Furthermore, we report evidence that electrotransfection success depends on plasmid mobility within the collagen-rich tissue, but not on cell proliferation status. In conclusion, in addition to proposing a reliable alternative to animal experiments, tissue engineering produces a valid biological tool for the in vitro study of gene electrotransfer mechanisms in human tissue. PMID:27029947
Reconstructing White Walls: Multi-View Multi-Shot 3D Reconstruction of Textureless Surfaces
NASA Astrophysics Data System (ADS)
Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf
2016-06-01
The reconstruction of the 3D geometry of a scene based on image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces by using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is based on improving the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range in 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in completeness of the 3D reconstruction is achieved.
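The preprocessing idea above, averaging several shots per viewpoint to suppress uncorrelated noise (standard deviation falls roughly as 1/sqrt(N)) and then amplifying contrast around a baseline, can be sketched as follows. This is our own simplified illustration (a global rather than adaptive local baseline, and a plain gain factor), not the authors' pipeline:

```python
import numpy as np

def enhance_viewpoint(shots, gain=4.0):
    """Average multiple shots from the same viewpoint to suppress
    uncorrelated noise, then amplify contrast around the mean level
    before quantizing back to the 8-bit range."""
    mean = np.mean(np.stack(shots).astype(float), axis=0)  # noise suppression
    base = mean.mean()                   # simplification: global baseline
    enhanced = base + gain * (mean - base)                 # contrast stretch
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```

With 16 shots of pixel noise sigma = 8, the averaged image has sigma of about 2, leaving headroom to multiply faint texture by `gain` without clipping.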
Application of 3D photo-reconstruction in soil erosion studies
NASA Astrophysics Data System (ADS)
Castillo, Carlos; James, Michael; Pérez, Rafael; Gómez, Jose Alfonso
2014-05-01
3D photo-reconstruction (3D-PR) has been applied successfully to obtain elevation models using uncalibrated and nonmetric cameras for a range of geoscience applications (e.g. James and Robson, 2012), including gully erosion assessment (Castillo et al., 2012). However, its application in soil erosion studies is still in its infancy. The aim of this work is to compare 3D-PR with conventional techniques that have traditionally been employed for different purposes in soil erosion studies. In this preliminary work, we tested three applications that involve volume calculations: estimation of soil bulk density (BD), quantification of soil erosion at road banks (RB) and sedimentation rates behind check dams (CD). For each analysis, a PR field survey was carried out simultaneously with a conventional method (volume of water was used for BD, and total station surveys for RB and CD). For the 3D-PR technique, the accuracy as a function of the number of pictures taken was evaluated. In this study we explore the difference in the volume estimates between 3D-PR and conventional techniques as well as the time requirements for each method, in order to compare their performance and optimal fields of application.
3D reconstruction of carbon nanotube networks from neutron scattering experiments
NASA Astrophysics Data System (ADS)
Mahdavi, Mostafa; Baniassadi, Majid; Baghani, Mostafa; Dadmun, Mark; Tehrani, Mehran
2015-09-01
Structure reconstruction from statistical descriptors, such as scattering data obtained using x-rays or neutrons, is essential in understanding various properties of nanocomposites. Scattering based reconstruction can provide a realistic model, over various length scales, that can be used for numerical simulations. In this study, 3D reconstruction of a highly loaded carbon nanotube (CNT)-conducting polymer system based on small and ultra-small angle neutron scattering (SANS and USANS, respectively) data was performed. These light-weight and flexible materials have recently shown great promise for high-performance thermoelectric energy conversion, and their further improvement requires a thorough understanding of their structure-property relationships. The first step in achieving such understanding is to generate models that contain the hierarchy of CNT networks over nano and micron scales. The studied system is a single walled carbon nanotube (SWCNT)/poly (3,4-ethylenedioxythiophene):poly (styrene sulfonate) (PEDOT:PSS). SANS and USANS patterns of the different samples containing 10, 30, and 50 wt% SWCNTs were measured. These curves were then utilized to calculate statistical two-point correlation functions of the nanostructure. These functions, along with the geometrical information extracted from SANS data and scanning electron microscopy images, were used to reconstruct a representative volume element (RVE) nanostructure. Generated RVEs can be used for simulations of various mechanical and physical properties. This work, therefore, introduces a framework for the reconstruction of 3D RVEs of high volume fraction nanocomposites containing high aspect ratio fillers from scattering experiments.
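The two-point correlation function mentioned above, S2(r), is the probability that two points separated by lag r both lie in the phase of interest. For a digitized microstructure it can be computed for all lags at once as the autocorrelation of the phase indicator function via FFT. A minimal sketch (our own, assuming periodic boundaries, not the authors' SANS-based estimation):

```python
import numpy as np

def two_point_correlation(phase):
    """Two-point probability S2(r) for a binary phase map, computed as
    the periodic autocorrelation of the indicator function via FFT.
    S2 at zero lag equals the phase volume fraction."""
    f = phase.astype(float)
    F = np.fft.fftn(f)
    s2 = np.fft.ifftn(F * np.conj(F)).real / f.size
    return s2
```

A reconstruction procedure then seeks a structure whose computed S2 matches the target function derived from the scattering data.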
Orbital Wall Reconstruction with Two-Piece Puzzle 3D Printed Implants: Technical Note.
Mommaerts, Maurice Y; Büttner, Michael; Vercruysse, Herman; Wauters, Lauri; Beerens, Maikel
2016-03-01
The purpose of this article is to describe a technique for secondary reconstruction of traumatic orbital wall defects using titanium implants that act as three-dimensional (3D) puzzle pieces. We present three cases of large defect reconstruction using implants produced by Xilloc Medical B.V. (Maastricht, the Netherlands) with a 3D printer manufactured by LayerWise (3D Systems; Heverlee, Belgium), and designed using the biomedical engineering software programs ProPlan and 3-Matic (Materialise, Heverlee, Belgium). The smaller size of the implants allowed sequential implantation for the reconstruction of extensive two-wall defects via a limited transconjunctival incision. The precise fit of the implants with regard to the surrounding ledges and each other was confirmed by intraoperative 3D imaging (Mobile C-arm Systems B.V. Pulsera, Philips Medical Systems, Eindhoven, the Netherlands). The patients showed near-complete restoration of orbital volume and ocular motility. However, challenges remain, including traumatic fat atrophy and fibrosis. PMID:26889349
NASA Astrophysics Data System (ADS)
Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.
2016-02-01
Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 × 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy, and new opportunities for longitudinal studies of cancer recurrence.
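The core triangulation step inside a structure-from-motion pipeline can be illustrated with the standard linear (DLT) method. The camera matrices and 3D point below are invented for the example; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two projection
    matrices and its pixel observations in each view."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector of A, in homogeneous coords
    return X[:3] / X[3]

# Two hypothetical camera poses observing one 3D point.
K = np.diag([800.0, 800.0, 1.0])                    # simple intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

A full pipeline repeats this over thousands of matched features and refines the result with bundle adjustment before meshing.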
High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures
Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando
2011-01-01
Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017
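A prototypical CS reconstruction solves a sparsity-regularized least-squares problem. The iterative soft-thresholding (ISTA) sketch below illustrates the idea on a toy 1D problem; the paper's actual algorithm, sampling pattern and coil model are not reproduced here, and all sizes and parameters are illustrative.

```python
import numpy as np

def ista(A, y, lam, steps=400):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1,
    the prototypical sparse-recovery solver used in compressive sensing."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Toy problem: recover a 4-sparse signal from 50 random measurements.
rng = np.random.default_rng(1)
n, m, k = 100, 50, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = 1.0 + rng.random(k)
y = A @ x_true
x_hat = ista(A, y, lam=0.02)
```

Practical MRI variants replace the dense matrix with undersampled Fourier/coil operators and a wavelet or total-variation sparsifying transform, which is where the GPU acceleration discussed above pays off.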
NASA Astrophysics Data System (ADS)
Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella
2015-09-01
Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.
Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach
de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José
2015-01-01
This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points—with 8 common points at the water surface—and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. The Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, the RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve swimming movement analysis accuracy. PMID:26175796
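The planar homography estimation used for rectification can be sketched with the direct linear transformation. The homography and point coordinates below are synthetic, and the code is an illustration of the standard DLT machinery rather than the study's implementation.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 planar homography from point correspondences by
    the direct linear transformation (DLT): stack two linear equations
    per correspondence and take the SVD null vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale

# Synthetic ground-truth homography and five coplanar control points.
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [37, 62]], float)
pts = np.hstack([src, np.ones((5, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H_est = estimate_homography(src, dst)
```

With noisy image measurements, coordinate normalisation and more correspondences are used, but the linear core is the same.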
Diachronic 3d Reconstruction for Lost Cultural Heritage
NASA Astrophysics Data System (ADS)
Guidi, G.; Russo, M.
2011-09-01
Cultural Heritage artifacts are often underestimated because of their hidden presence in the landscape. This problem is particularly acute in countries like Italy, where the massive number of "famous" artifacts tends to overshadow other sites unless they are properly exposed, or where the remains are so dramatically damaged that very few interpretation clues are left to the visitor. In such cases a virtual presentation of the Cultural Heritage site can be of great help, especially for explaining the evolution of its status, sometimes giving sense to a few spare stones. The definition of these digital representations deals with two crucial aspects: on the one hand, the possibility of 3D surveying the relics in order to have an accurate geometrical image of the current status of the artifact; on the other hand, the presence of historical sources, both in the form of written text and images, that, once properly matched with the current geometrical data, may help to digitally recreate a set of 3D models representing visually the various historical phases (a diachronic model), up to the current one. The core of this article is the definition of an integrated methodology that starts from a high-resolution digital survey of the remains of an ancient building and develops a coherent virtual reconstruction from different historical sources, suggesting a scalable method suitable to be re-used for generating a 4D (geometry + time) model of the artifact. This approach has been tested on the "Basilica di San Giovanni in Conca" in Milan, a very significant example for its complex historic evolution that combines evident historic values with an invisible presence inside the city.
Reconstruction of 3D ion beam micro-tomography data for applications in Cell Biology
NASA Astrophysics Data System (ADS)
Habchi, C.; Nguyen, D. T.; Barberet, Ph.; Incerti, S.; Moretto, Ph.; Sakellariou, A.; Seznec, H.
2009-06-01
The DISRA (Discrete Image Space Reconstruction Algorithm) reconstruction code, created by A. Sakellariou, was conceived for the ideal case of complete three-dimensional (3D) PIXET (Particle Induced X-ray Emission Tomography) data. This implies two major difficulties for biological samples: first, the long duration of such experiments and second, the subsequent damage that occurs on such fragile specimens. For this reason, the DISRA code was extended at CENBG in order to probe isolated PIXET slices, taking into account the sample structure and mass density provided by 3D STIMT (Scanning Transmission Ion Microscopy Tomography) in the volume of interest. This modified version was tested on a phantom sample and first results on human cancer cells are also presented.
Height inspection of wafer bumps without explicit 3D reconstruction
NASA Astrophysics Data System (ADS)
Dong, Mei; Chung, Ronald; Zhao, Yang; Lam, Edmund Y.
2006-02-01
The shrinking dimensions of electronic devices lead to more stringent requirements on process control and quality assurance in their fabrication. For instance, direct die-to-die bonding requires placement of solder bumps not on the PCB but on the wafer itself. Such wafer solder bumps, which are much miniaturized from their counterparts on PCB, still need to have their heights meet the specification, or else the electrical connection could be compromised, the dies could be crushed, or the manufacturing equipment could even be damaged. Yet the tiny size, typically tens of microns in diameter, and the textureless, mirror-like nature of the bumps pose a great challenge to the 3D inspection process. This paper addresses how a large number of such wafer bumps can have their heights checked en masse against the specification. We assume ball bumps in this work. We propose a novel inspection measure over the collection of bump heights that possesses these advantages: (1) it is sensitive to global and local disturbances to the bump heights, thus serving the bump height inspection purpose; (2) it is invariant to how individual bumps are locally displaced against one another on the substrate surface, thus tolerating 2D displacement error in soldering the bumps onto the wafer substrate; and (3) it is largely invariant to how the wafer itself is globally positioned relative to the imaging system, thus having tolerance to repeatability error in wafer placement. This measure makes use of the mirror nature of the bumps, which used to cause difficulty in traditional inspection methods, to capture images of two planes: one contains the bump peaks and the other corresponds to the substrate. With the homography matrices of these two planes and the fundamental matrix of the camera, we synthesize a matrix called the Biplanar Disparity Matrix. This matrix can summarize the bumps' heights in a fast and direct way without going through explicit 3D reconstruction. We also present a design of the imaging and
3D TEM reconstruction and segmentation process of laminar bio-nanocomposites
Iturrondobeitia, M.; Okariz, A.; Fernandez-Martinez, R.; Jimbert, P.; Guraya, T.; Ibarretxe, J.
2015-03-30
The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the amount of clay platelet opening after integration with the polymer matrix and determines the final properties of the material. The transmission electron microscopy (TEM) technique is the only one that can provide a direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows the complete 3D characterization of the structure, including the measurement of the orientation of the clay platelets, their morphology and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the study object. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective/automated segmentation methodology for a 3D TEM tomography reconstruction. In this method the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented V_clay (%) to the actual one. The method is first validated using a fictitious set of objects, and then applied to a nanocomposite.
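One ingredient of such threshold optimization — choosing the grey level that reproduces a known clay volume fraction — can be sketched as follows. The data are synthetic, and the paper's additional criterion of minimizing the variation of object dimensions is omitted.

```python
import numpy as np

def threshold_for_fraction(volume, target_fraction):
    """Pick the grey-level threshold whose segmentation reproduces a
    known volume fraction of the segmented phase: the (1 - f) quantile
    of the reconstructed grey values."""
    return np.quantile(volume, 1.0 - target_fraction)

# Hypothetical reconstructed grey-value volume and a 5% clay fraction.
recon = np.random.default_rng(5).normal(size=(32, 32, 32))
t = threshold_for_fraction(recon, 0.05)
segmented = recon > t
```

In practice the fraction-matching constraint is combined with a stability criterion on the segmented objects, as the abstract describes.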
Model-based adaptive 3D sonar reconstruction in reverberating environments.
Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le
2015-10-01
In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction. PMID:25974936
Web-based volume slicer for 3D electron-microscopy data from EMDB.
Salavert-Torres, José; Iudin, Andrii; Lagerstedt, Ingvar; Sanz-García, Eduardo; Kleywegt, Gerard J; Patwardhan, Ardan
2016-05-01
We describe the functionality and design of the Volume slicer - a web-based slice viewer for EMDB entries. This tool uniquely provides the facility to view slices from 3D EM reconstructions along the three orthogonal axes and to rapidly switch between them and navigate through the volume. We have employed multiple rounds of user-experience testing with members of the EM community to ensure that the interface is easy and intuitive to use and the information provided is relevant. The impetus to develop the Volume slicer has been calls from the EM community to provide web-based interactive visualisation of 2D slice data. This would be useful for quick initial checks of the quality of a reconstruction. Again in response to calls from the community, we plan to further develop the Volume slicer into a fully-fledged Volume browser that provides integrated visualisation of EMDB and PDB entries from the molecular to the cellular scale. PMID:26876163
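The basic operation such a viewer performs — serving sections of a 3D map along the three orthogonal axes — can be sketched in a few lines. This is a generic NumPy illustration, not the Volume slicer's actual backend.

```python
import numpy as np

# Toy 3D map with distinct axis lengths so the slice shapes are obvious.
volume = np.arange(4 * 5 * 6, dtype=np.float32).reshape(4, 5, 6)

def orthogonal_slices(vol, i, j, k):
    """Return the three orthogonal sections through voxel (i, j, k)."""
    return vol[i, :, :], vol[:, j, :], vol[:, :, k]

xy, xz, yz = orthogonal_slices(volume, 2, 3, 4)
```

Rapidly switching axes amounts to indexing a different dimension of the same in-memory (or memory-mapped) array, which is what makes interactive navigation cheap.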
Volume estimation of tonsil phantoms using an oral camera with 3D imaging.
Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh
2016-04-01
Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance (MRI) imaging are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and tongue base flexible laryngoscopes are required which only provide a two dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy where current physical examination has limitations. In this report, we designed a hand held portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy where the tonsils get enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate airway obstruction percentage and volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667
SOLIDFELIX: a transportable 3D static volume display
NASA Astrophysics Data System (ADS)
Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom
2009-02-01
Flat 2D screens cannot display complex 3D structures without using different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve this problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists tried to develop similar 3D displays. Our paper includes an overview from 1912 up to today. During several years of investigations on swept volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX team also started investigations in the area of static volume displays. Within three years of research on our 3D static volume display at a normal high school in Germany we were able to achieve considerable results despite the minor funding resources of this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). In addition, such crystals are limited to a very small size, which is the reason why we later investigated heavy-metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to the mentioned group, making it possible to increase both the display volume and the brightness of the images significantly. Although, our display is currently
High performance computing approaches for 3D reconstruction of complex biological specimens.
da Silva, M Laura; Roca-Piera, Javier; Fernández, José-Jesús
2010-01-01
Knowledge of the structure of specimens is crucial to determine the role that they play in cellular and molecular biology. Obtaining the three-dimensional (3D) reconstruction by means of tomographic reconstruction algorithms requires large projection images and long processing times. Therefore, we propose the use of high performance computing (HPC) to cope with the huge computational demands of this problem. We have implemented a HPC strategy where the distribution of tasks follows the master-slave paradigm. The master processor distributes slabs of slices, pieces of the final 3D structure to reconstruct, among the slave processors and receives the reconstructed slices of the volume. We have evaluated the performance of our HPC approach using different slab sizes. We have observed that it is possible to find an optimal slab size, for the number of processors used, that minimizes communication time while maintaining a reasonable grain of parallelism to be exploited by the set of processors. PMID:20865517
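The master-slave slab distribution can be sketched as follows. The per-slice "reconstruction" is a placeholder kernel rather than a real tomographic backprojection, and a thread pool stands in for the message passing of a real HPC implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct_slice(sinogram_slice):
    """Placeholder per-slice kernel (a real system would backproject)."""
    return sinogram_slice - sinogram_slice.mean()

def reconstruct_volume(data, slab_size, workers=4):
    """Master: split the output volume into slabs of slices, hand each
    slab to a worker, then reassemble the reconstructed slabs in order."""
    slabs = [data[i:i + slab_size] for i in range(0, len(data), slab_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        done = pool.map(lambda s: np.array([reconstruct_slice(z) for z in s]),
                        slabs)
    return np.concatenate(list(done))

data = np.random.default_rng(2).random((32, 16, 16))
vol = reconstruct_volume(data, slab_size=8)
```

The slab size is exactly the tuning knob the abstract discusses: larger slabs mean fewer messages, smaller slabs mean more parallel grain.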
NASA Astrophysics Data System (ADS)
Monserrat, Carlos; Alcaniz-Raya, Mariano L.; Juan, M. Carmen; Grau Colomer, Vincente; Albalat, Salvador E.
1997-05-01
This paper describes a new method for 3D orthodontic treatment simulation developed for an orthodontic planning system (MAGALLANES). We developed an original system for 3D capture and reconstruction of dental anatomy that avoids the use of dental casts in orthodontic treatments. Two original techniques are presented: one direct, in which data are acquired directly from the patient's mouth by means of low cost 3D digitizers, and one mixed, in which data are obtained by 3D digitizing of hydrocolloid molds. For this purpose we have designed and manufactured an optimized optical measuring system based on laser structured light. We apply these 3D dental models to simulate the 3D movement of teeth, including rotations, during orthodontic treatment. The proposed algorithms make it possible to quantify the effect of the orthodontic appliance on tooth movement. The developed techniques have been integrated in a system named MAGALLANES. This original system presents several tools for 3D simulation and planning of orthodontic treatments. The prototype system has been tested in several orthodontic clinics with very good results.
Reconstruction Error of Calibration Volume’s Coordinates for 3D Swimming Kinematics
Figueiredo, Pedro; Machado, Leandro; Vilas-Boas, João Paulo; Fernandes, Ricardo J.
2011-01-01
The aim of this study was to investigate the accuracy and reliability of above- and underwater 3D reconstruction of three calibration volumes with different control point arrangements (#1 - on vertical and horizontal rods; #2 - on vertical and horizontal rods and facets; #3 - on crossed horizontal rods). Each calibration volume (3 × 2 × 3 m) was positioned in a 25 m swimming pool (half above and half below the water surface) and recorded with four underwater and two above-water synchronised cameras (50 Hz). Reconstruction accuracy was determined by calculating the RMS error of twelve validation points. The standard deviation across all digitisations of the same marker was used for the reliability estimation. Comparison among different numbers of control points showed that the set of 24 points produced the most accurate results. Volume #2 presented higher accuracy (RMS errors: 5.86 and 3.59 mm for the x axis, 3.45 and 3.11 mm for the y axis, and 4.38 and 4.00 mm for the z axis, considering under and above water, respectively) and reliability (SD: underwater cameras ± [0.2; 0.6] mm; above-water cameras ± [0.2; 0.3] mm), which may be considered suitable for 3D swimming kinematic analysis. Results revealed that the RMS error was greater during underwater analysis, possibly due to refraction. PMID:23486761
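The per-axis RMS reconstruction error reported above can be computed as follows; the coordinates are synthetic, purely for illustration.

```python
import numpy as np

def per_axis_rms(reconstructed, true):
    """Per-axis RMS error over a set of validation points:
    sqrt(mean of squared residuals), computed separately for x, y, z."""
    return np.sqrt(((reconstructed - true) ** 2).mean(axis=0))

# Hypothetical true and reconstructed validation-point coordinates (metres).
true_pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
rec_pts = true_pts + np.array([[3e-3, 0.0, 0.0],
                               [-3e-3, 4e-3, 0.0],
                               [0.0, -4e-3, 5e-3]])
rms = per_axis_rms(rec_pts, true_pts)   # millimetre-scale error per axis
```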
DIII-D Equilibrium Reconstructions with New 3D Magnetic Probes
NASA Astrophysics Data System (ADS)
Lao, Lang; Strait, E. J.; Ferraro, N. M.; Ferron, J. R.; King, J. D.; Lee, X.; Meneghini, O.; Turnbull, A. D.; Huang, Y.; Qian, J. G.; Wingen, A.
2015-11-01
DIII-D equilibrium reconstructions with the recently installed new 3D magnetic diagnostic are presented. In addition to providing information to allow more accurate 2D reconstructions, the new 3D probes also provide useful information to guide computation of 3D perturbed equilibria. A new more comprehensive magnetic compensation has been implemented. Algorithms are being developed to allow EFIT to reconstruct 3D perturbed equilibria making use of the new 3D probes and plasma responses from 3D MHD codes such as GATO and M3D-C1. To improve the computation efficiency, all inactive probes in one of the toroidal planes in EFIT have been replaced with new probes from other planes. Other 3D efforts include testing of 3D reconstructions using V3FIT and a new 3D variational moment equilibrium code VMOM3D. Other EFIT developments include a GPU EFIT version and new safety factor and MSE-LS constraints. The accuracy and limitation of the new probes for 3D reconstructions will be discussed. Supported by US DOE under DE-FC02-04ER54698 and DE-FG02-95ER54309.
Automatic Reconstruction of Spacecraft 3D Shape from Imagery
NASA Astrophysics Data System (ADS)
Poelman, C.; Radtke, R.; Voorhees, H.
We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
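The silhouette-carving step can be illustrated with a toy orthographic version: a voxel survives only if it projects inside the object's silhouette in every view. Real systems use calibrated perspective projections consistent with the recovered motion; the axis-aligned setup below is a simplification, not the system's code.

```python
import numpy as np

def carve(silhouettes_by_axis, shape):
    """Keep a voxel only if its axis-aligned projection falls inside the
    silhouette of every view (viewpoint-consistency test)."""
    keep = np.ones(shape, dtype=bool)
    for axis, sil in silhouettes_by_axis.items():
        keep &= np.expand_dims(sil, axis)   # extrude silhouette along axis
    return keep

# Ground-truth object: a small solid sphere in a 32^3 voxel grid.
n = 32
z, y, x = np.ogrid[:n, :n, :n]
sphere = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 <= 8 ** 2
sils = {axis: sphere.any(axis=axis) for axis in (0, 1, 2)}
model = carve(sils, sphere.shape)
```

The carved volume is always a superset of the true object (the visual hull), which is why texture-mapping it with real frames still yields convincing renderings.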
3D virtual colonoscopy with real-time volume rendering
NASA Astrophysics Data System (ADS)
Wan, Ming; Li, Wei J.; Kreeger, Kevin; Bitter, Ingmar; Kaufman, Arie E.; Liang, Zhengrong; Chen, Dongqing; Wax, Mark R.
2000-04-01
In our previous work, we developed a virtual colonoscopy system on a high-end 16-processor SGI Challenge with an expensive hardware graphics accelerator. The goal of this work is to port the system to a low cost PC in order to increase its availability for mass screening. Recently, Mitsubishi Electric has developed a volume-rendering PC board, called VolumePro, which includes 128 MB of RAM and a vg500 rendering chip. The vg500 chip, based on Cube-4 technology, can render a 256³ volume at 30 frames per second. High image quality of volume rendering inside the colon is guaranteed by the full lighting model and 3D interpolation supported by the vg500 chip. However, the VolumePro board is lacking some features required by our interactive colon navigation. First, VolumePro currently does not support perspective projection, which is paramount for interior colon navigation. Second, the patient colon data is usually much larger than 256³ and cannot be rendered in real-time. In this paper, we present our solutions to these problems, including simulated perspective projection and axis-aligned boxing techniques, and demonstrate the high performance of our virtual colonoscopy system on low cost PCs.
2D-3D Registration of CT Vertebra Volume to Fluoroscopy Projection: A Calibration Model Assessment
NASA Astrophysics Data System (ADS)
Bifulco, P.; Cesarelli, M.; Allen, R.; Romano, M.; Fratini, A.; Pasquariello, G.
2009-12-01
This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. An accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; then, the vertebra 3D pose was estimated and the results compared. Error analysis revealed an accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.
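The similarity measure that drives such 2D-3D registration is not specified above; normalized cross-correlation is one common choice for comparing a DRR with a fluoroscopic frame, sketched here on synthetic images as an assumed example.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized images:
    1.0 for a perfect (affine-intensity) match, ~0 for unrelated images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()

# Synthetic stand-in for a DRR; a registration loop would re-render the
# DRR at each candidate pose and keep the pose maximising the score.
drr = np.random.default_rng(3).random((64, 64))
score_match = ncc(drr, drr)                    # perfect alignment
score_off = ncc(drr, np.roll(drr, 5, axis=1))  # drops when misaligned
```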
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Reconstruction of 3D structure using stochastic methods: morphology and transport properties
NASA Astrophysics Data System (ADS)
Karsanina, Marina; Gerke, Kirill; Čapek, Pavel; Vasilyev, Roman; Korost, Dmitry; Skvortsova, Elena
2013-04-01
One of the main factors defining numerous flow phenomena in rocks, soils and other porous media, including fluid and solute movements, is pore structure, e.g., pore sizes and their connectivity. Numerous numerical methods were developed to quantify single and multi-phase flow in such media on the microscale. Among the most popular are: 1) a wide range of finite difference/element/volume solutions of the Navier-Stokes equations and its simplifications; 2) the lattice-Boltzmann method; 3) pore-network models, among others. Each method has some advantages and shortcomings, so that different research teams usually utilize more than one, depending on the study case. Recent progress in 3D imaging of internal structure, e.g., X-ray tomography, FIB-SEM and confocal microscopy, made it possible to obtain digitized input pore parameters for such models; however, a trade-off between resolution and sample size is usually unavoidable. There are situations when only standard two-dimensional information about the porous structure is available, due to the high cost or resolution limitations of tomography. However, physical modeling on the microscale requires 3D information. There are three main approaches to reconstruct (using 2D cut(s) or some other limited information/properties) porous media: 1) statistical methods (correlation functions and simulated annealing, multi-point statistics, entropy methods), 2) sequential methods (sphere or other granular packs) and 3) morphological methods. Stochastic reconstructions using correlation functions possess an important advantage: they provide a statistical description of the structure, which is known to have relationships with all physical properties. In addition, this method is more flexible for other applications to characterize porous media. Taking different 3D scans of natural and artificial porous materials (sandstones, soils, shales, ceramics) we choose some 2D cut/s as sources of input correlation functions. Based on different types of correlation functions
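The annealing-style stochastic reconstruction mentioned above can be sketched at toy scale in 2D: pixels of a random binary image are swapped until a directional two-point correlation matches a target. All parameters below (sizes, temperature, step count) are illustrative, and real reconstructions work in 3D with richer descriptors.

```python
import numpy as np

rng = np.random.default_rng(4)

def s2(img, max_r):
    """Two-point probability along x, periodic boundaries."""
    return np.array([(img & np.roll(img, r, axis=1)).mean()
                     for r in range(max_r + 1)])

# Striped synthetic "truth" whose correlation we try to match.
target_img = np.repeat(rng.random((24, 6)) < 0.4, 4, axis=1)
target = s2(target_img, 8)

# Start from a random image with the same volume fraction.
img = rng.permutation(target_img.ravel()).reshape(target_img.shape)
cost0 = cost = ((s2(img, 8) - target) ** 2).sum()
T = 1e-4                                   # fixed low temperature
for _ in range(4000):
    p = tuple(rng.integers(0, 24, 2))
    q = tuple(rng.integers(0, 24, 2))
    if img[p] == img[q]:
        continue
    img[p], img[q] = img[q], img[p]        # propose a pixel swap
    new_cost = ((s2(img, 8) - target) ** 2).sum()
    if new_cost < cost or rng.random() < np.exp((cost - new_cost) / T):
        cost = new_cost                    # accept (Metropolis rule)
    else:
        img[p], img[q] = img[q], img[p]    # reject: undo the swap
```

Swaps preserve the volume fraction exactly, which is why annealing over correlation functions is a natural fit for two-phase media.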
3D reconstruction of SEM images by use of optical photogrammetry software.
Eulitz, Mona; Reiss, Gebhard
2015-08-01
Reconstruction of the three-dimensional (3D) surface of an object is widely used for structural analysis in science, and many biological questions require information about the true 3D structure of a specimen. For Scanning Electron Microscopy (SEM), no efficient non-destructive solution for reconstructing surface morphology has existed to date. The well-known method of recording stereo pair images generates a stereoscopic 3D reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special requirements of SEM: instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction, suitable for various applications in research and teaching. PMID:26073969
Automatic 3-D grayscale volume matching and shape analysis.
Guétat, Grégoire; Maitre, Matthieu; Joly, Laurène; Lai, Sen-Lin; Lee, Tzumin; Shinagawa, Yoshihisa
2006-04-01
Recently, shape matching in three dimensions (3-D) has been gaining importance in a wide variety of fields such as computer graphics, computer vision, medicine, and biology, with applications such as object recognition, medical diagnosis, and quantitative morphological analysis of biological organs. Automatic shape matching techniques developed in the field of computer graphics handle object surfaces, but ignore the intensities of inner voxels. In biology and medical imaging, voxel intensities obtained by computed tomography (CT), magnetic resonance imagery (MRI), and confocal microscopes are important for determining point correspondences. Nevertheless, most biomedical volume matching techniques require human interaction, and automatic methods assume that the matched objects have very similar shapes so as to avoid combinatorial explosions of point correspondences. This article is aimed at decreasing the gap between the two fields. The proposed method automatically finds dense point correspondences between two grayscale volumes; i.e., it finds a correspondent in the second volume for every voxel in the first volume, based on the voxel intensities. Multiresolution pyramids are introduced to reduce computational load and handle highly plastic objects. We calculate the average shape of a set of similar objects and give a measure of plasticity to compare them. Matching results can also be used to generate intermediate volumes for morphing. We use various data to validate the effectiveness of our method: we calculate the average shape and plasticity of a set of fly brain cells, and we also match a human skull and an orangutan skull. PMID:16617625
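The multiresolution pyramids mentioned above, used to keep dense voxel-wise matching tractable, can be sketched in numpy; 2x2x2 block averaging is one common downsampling choice, assumed here for illustration:

```python
import numpy as np

def pyramid_3d(volume, levels):
    """Build a multiresolution pyramid by averaging 2x2x2 blocks at each
    level, enabling coarse-to-fine voxel-wise matching."""
    pyr = [np.asarray(volume, dtype=float)]
    for _ in range(levels - 1):
        v = pyr[-1]
        # trim to even dimensions, then block-average
        v = v[: v.shape[0] // 2 * 2, : v.shape[1] // 2 * 2, : v.shape[2] // 2 * 2]
        v = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2, v.shape[2] // 2, 2)
        pyr.append(v.mean(axis=(1, 3, 5)))
    return pyr

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
pyr = pyramid_3d(vol, 3)
print([p.shape for p in pyr])   # [(32, 32, 32), (16, 16, 16), (8, 8, 8)]
```

Matching then starts on the coarsest level and propagates correspondences down, which is what makes dense grayscale matching affordable.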
APPROXIMATION OF SURFACES IN QUANTITATIVE 3-D RECONSTRUCTIONS
In serial section reconstructions a series of planar profiles is taken, representing curves on the surface of the structure to be reconstructed. For a number of quantitative serial section methods, approximation of a surface is done by the formation of tiles between points of adja...
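The tiling step described above, connecting successive planar profiles with triangles, can be sketched as follows; equal vertex counts per contour are assumed for simplicity (real methods must also solve the correspondence problem between unequal contours):

```python
import numpy as np

def tile_between_contours(c0, c1):
    """Connect two closed planar profiles with the same number of vertices
    into a band of triangles (two per edge pair), the basic 'tiling' step
    of serial-section surface reconstruction."""
    assert len(c0) == len(c1)
    n = len(c0)
    verts = np.vstack([c0, c1])           # c1 vertices are offset by n
    tris = []
    for i in range(n):
        j = (i + 1) % n
        tris.append((i, j, n + j))        # upper triangle of the quad
        tris.append((i, n + j, n + i))    # lower triangle of the quad
    return verts, np.array(tris)

t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
c0 = np.c_[np.cos(t), np.sin(t), np.zeros(8)]              # profile on section z=0
c1 = np.c_[1.2 * np.cos(t), 1.2 * np.sin(t), np.ones(8)]   # next section, z=1
verts, tris = tile_between_contours(c0, c1)
print(len(tris))   # 16: two triangles per edge pair
```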
A Gauss-Seidel Iteration Scheme for Reference-Free 3-D Histological Image Reconstruction
Daum, Volker; Steidl, Stefan; Maier, Andreas; Köstler, Harald; Hornegger, Joachim
2015-01-01
Three-dimensional (3-D) reconstruction of histological slice sequences offers great benefits in the investigation of different morphologies. It features very high resolution, which is still unmatched by in-vivo 3-D imaging modalities, and tissue staining further enhances visibility and contrast. One important step during reconstruction is the reversal of slice deformations introduced during histological slice preparation, a process also called image unwarping. Most methods use an external reference, or rely on conservative stopping criteria during the unwarping optimization to prevent straightening of naturally curved morphology. Our approach builds on the observation that the warped image stack is a superposition of low-frequency anatomy and high-frequency errors. We present an iterative scheme that transfers the ideas of the Gauss-Seidel method to image stacks to separate the anatomy from the deformation. In particular, the scheme is universally applicable without restriction to a specific unwarping method, and uses no external reference. The deformation artifacts are effectively reduced in the resulting histology volumes, while the natural curvature of the anatomy is preserved. The validity of our method is shown on synthetic data, simulated histology data using a CT data set and real histology data. In the case of the simulated histology, where the ground truth was known, the mean Target Registration Error (TRE) between the unwarped and original volume could be reduced to less than 1 pixel on average after 6 iterations of our proposed method. PMID:25312918
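The separation of low-frequency anatomy from high-frequency slice deformations can be illustrated on a toy per-slice quantity (e.g. a centroid coordinate): in-place Gauss-Seidel sweeps of neighbour averaging damp the high-frequency jitter first, while the smooth trend survives a few sweeps largely intact. This is an analogue of the idea, not the paper's registration scheme:

```python
import numpy as np

def gauss_seidel_smooth(x, sweeps):
    """In-place Gauss-Seidel sweeps of neighbour averaging on interior
    points; high-frequency components are damped fastest."""
    x = np.array(x, dtype=float)
    for _ in range(sweeps):
        for i in range(1, len(x) - 1):
            x[i] = 0.5 * (x[i - 1] + x[i + 1])   # uses the updated x[i-1]
    return x

z = np.arange(40)
anatomy = 0.05 * (z - 20.0) ** 2             # smooth, naturally curved trend
jitter = 0.8 * (-1.0) ** z                   # slice-to-slice preparation artifact
out = gauss_seidel_smooth(anatomy + jitter, sweeps=5)

# the high-frequency jitter (large second differences) is strongly reduced
print(np.abs(np.diff(out, 2)).max() < np.abs(np.diff(anatomy + jitter, 2)).max())
```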
Osewski, Wojciech; Dolla, Łukasz; Radwan, Michał; Szlag, Marta; Rutkowski, Roman; Smolińska, Barbara; Ślosarek, Krzysztof
2014-01-01
Aim: To present practical examples of our new algorithm for reconstruction of 3D dose distribution, based on actual MLC leaf movement. Background: DynaLog and RTplan files were used by the DDcon software to prepare a new RTplan file for dose distribution reconstruction. Materials and methods: Four different clinically relevant scenarios were used to assess the feasibility of the proposed new approach: (1) reconstruction of whole treatment sessions for prostate cancer; (2) reconstruction of an IMRT verification treatment plan; (3) dose reconstruction in breast cancer; (4) reconstruction of an interrupted arc and complementary plan for an interrupted VMAT treatment session of prostate cancer. The applied reconstruction method was validated by comparing reconstructed and measured fluence maps. For all statistical analyses, the Mann–Whitney U test was used. Results: In the first two and the fourth cases, there were no statistically significant differences between the planned and reconstructed dose distributions (p = 0.910, p = 0.975, p = 0.893, respectively). In the third case the differences were statistically significant (p = 0.015), and the treatment plan had to be reconstructed. Conclusion: The developed dose distribution reconstruction algorithm is a very useful QA tool. It provides a means for 3D dose distribution verification in the patient volume and allows evaluation of the influence of actual MLC leaf motion on the dose distribution. PMID:25337416
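The statistical comparison named in the abstract, a Mann–Whitney U test on planned versus reconstructed doses, can be sketched with scipy; the point-dose samples below are synthetic and purely illustrative:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
planned = rng.normal(2.00, 0.05, 200)              # hypothetical doses (Gy)
recon_ok = planned + rng.normal(0.0, 0.005, 200)   # faithful delivery
recon_bad = planned + 0.10                         # systematic 0.1 Gy shift

_, p_ok = mannwhitneyu(planned, recon_ok, alternative="two-sided")
_, p_bad = mannwhitneyu(planned, recon_bad, alternative="two-sided")
print(p_ok, p_bad)   # the shifted reconstruction yields a very small p-value
```

A significant p-value, as in the breast-cancer case above, flags a delivery whose reconstructed dose distribution deviates from the plan.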
Fringe projection profilometry for panoramic 3D reconstruction
NASA Astrophysics Data System (ADS)
Almaraz-Cabral, César-Cruz; Gonzalez-Barbosa, José-Joel; Villa, Jesús; Hurtado-Ramos, Juan-Bautista; Ornelas-Rodriguez, Francisco-Javier; Córdova-Esparza, Diana-Margarita
2016-03-01
In this paper, we introduce a panoramic profilometric system to reconstruct inner cylindrical environments. The system projects circular fringes and uses a temporal phase unwrapping technique. The recovered phase map is used to reconstruct objects placed on the inner cylindrical surface. We derive a phase-to-depth conversion formula for this system. The use of fringe projection allows dense reconstructions. The panoramic system is composed of a digital projector, two parabolic mirrors and a CCD camera. All these components share a common axis with a reference cylinder. This paper presents results for distinct objects.
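The phase unwrapping at the heart of such profilometry can be illustrated with numpy; the paper unwraps temporally across fringe patterns, but the wrap/unwrap relationship is the same, and the phase-to-depth gain k below is an assumed placeholder, not the paper's derived formula:

```python
import numpy as np

# A wrapped phase profile along one line of the fringe image
true_phase = np.linspace(0.0, 6 * np.pi, 200)        # exceeds 2*pi -> wraps
wrapped = np.angle(np.exp(1j * true_phase))          # values in (-pi, pi]
unwrapped = np.unwrap(wrapped)                       # restore continuity

k = 0.5                                              # mm of depth per radian (assumed)
depth = k * (unwrapped - unwrapped[0])               # linear phase-to-depth model
print(np.allclose(unwrapped, true_phase))            # True
```

Unwrapping succeeds here because adjacent samples differ by less than pi; temporal unwrapping removes even that restriction by varying the fringe frequency per pixel.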
The New Approach to Sport Medicine: 3-D Reconstruction
ERIC Educational Resources Information Center
Ince, Alparslan
2015-01-01
The aim of this study is to present a new approach to sport medicine. A comparative analysis of the Vertebrae Lumbales was performed for a sedentary group and Muay Thai athletes by acquiring three-dimensional (3-D) data and models through photogrammetric methods from Multi-detector Computerized Tomography (MDCT) images of the Vertebrae…
Automated reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.
Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.
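A core step of any such pipeline, once the camera matrices have been recovered, is linear triangulation of scene points from their image projections; a minimal numpy sketch with two hypothetical calibrated cameras:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two projections.
    P1, P2: 3x4 camera matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null-space vector = homogeneous point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: identity pose, and a 1-unit baseline along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 5.0])
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))   # True
```

In a full self-calibrating system the matrices P1, P2 are themselves estimated from image correspondences; here they are given to isolate the triangulation step.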
3-D Virtual and Physical Reconstruction of Bendego Iron
NASA Astrophysics Data System (ADS)
Belmonte, S. L. R.; Zucolotto, M. E.; Fontes, R. C.; dos Santos, J. R. L.
2012-09-01
3D laser scanning is applied to meteoritics to preserve the original shape of meteorites before cutting; the scan data are saved in STL (stereolithography) format, which makes it possible to print three-dimensional physical models and generate a digital replica.
Robust 3D reconstruction system for human jaw modeling
NASA Astrophysics Data System (ADS)
Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.
1999-03-01
This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. To evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.
Maiti, Abhik; Chakravarty, Debashish
2016-01-01
3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate a photo-realistic 3D watertight surface of different irregularly shaped objects from digital image sequences. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used to generate photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the Ball-pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of 3D surfaces from point clouds of different density are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on Samples per node (SN), with greater SN values resulting in better-quality surfaces. Also, the quality of the 3D surface generated using the Ball-pivoting algorithm is found to be highly dependent upon the Clustering radius and Angle threshold values. The results obtained from this study give readers a valuable insight into the effects of different control parameters on the reconstructed surface quality. PMID:27386376
Thermal infrared exploitation for 3D face reconstruction
NASA Astrophysics Data System (ADS)
Abayowa, Bernard O.
2009-05-01
Despite the advances in face recognition research, current face recognition systems are still not accurate or robust enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and then a 3D modeler is used to estimate the geometric structure from the predicted visual imagery. This research will find its application in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alert, missing dementia patient) and manhunt scenarios.
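The linear core of canonical correlation analysis, which KCCA extends by replacing inner products with kernel evaluations, can be sketched in numpy; the "thermal" and "visible" feature matrices below are synthetic stand-ins for the learned texture features:

```python
import numpy as np

def first_canonical_correlation(X, Y, eps=1e-8):
    """Top canonical correlation between data matrices X (n x p) and
    Y (n x q): whiten both covariances, then take the largest singular
    value of the whitened cross-covariance."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + eps * np.eye(X.shape[1])   # ridge for stability
    Cyy = Y.T @ Y / len(Y) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)[0]

rng = np.random.default_rng(1)
thermal = rng.normal(size=(100, 4))            # stand-in infrared features
visible = thermal @ rng.normal(size=(4, 4))    # perfectly linearly related
rho = first_canonical_correlation(thermal, visible)
print(rho)   # close to 1: a perfect linear relationship is detected
```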
3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine
NASA Astrophysics Data System (ADS)
Hamamoto, Kazuhiko; Sato, Motoyoshi
3D imaging techniques are very important and indispensable for diagnosis. The mainstream techniques are those in which a 3D image is reconstructed from a set of slice images, such as X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost and small 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.
Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.
Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed
2009-06-01
Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilizes as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings of the computer simulations. PMID:19380272
Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction
NASA Astrophysics Data System (ADS)
Austin, Christian D.; Moses, Randolph L.
2006-05-01
This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.
Single view-based 3D face reconstruction robust to self-occlusion
NASA Astrophysics Data System (ADS)
Lee, Youn Joo; Lee, Sung Joo; Park, Kang Ryoung; Jo, Jaeik; Kim, Jaihie
2012-12-01
The state-of-the-art 3D morphable model (3DMM) is widely used for 3D face reconstruction from a single image. However, this method has a high computational cost, and hence a simplified 3D morphable model (S3DMM) was proposed as an alternative. Unlike the original 3DMM, S3DMM uses only a sparse 3D facial shape, and therefore it incurs a lower computational cost. However, this method is vulnerable to self-occlusion due to head rotation. Therefore, we propose a solution to the self-occlusion problem in S3DMM-based 3D face reconstruction. This research is novel compared with previous works in three respects. First, self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model. Second, a 3D model fitting scheme is designed based on selected visible facial feature points, which facilitates 3D face reconstruction without any effect from self-occlusion. Third, the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3D model fitting process. The experimental results showed that the self-occlusion detection had high accuracy and our proposed method delivered a noticeable improvement in 3D face reconstruction performance compared with previous methods.
3D reconstruction of tropospheric cirrus clouds by stereovision system
NASA Astrophysics Data System (ADS)
Nadjib Kouahla, Mohamed; Moreels, Guy; Seridi, Hamid
2016-07-01
A stereo imaging method is applied to measure the altitude of cirrus clouds and provide a 3D map of the altitude of the layer centroid. These clouds are located in the high troposphere and sometimes in the lower stratosphere, between 6 and 10 km high. Two simultaneous images of the same scene are taken with Canon (400D) cameras at two sites 37 km apart. Each image is processed to invert the perspective effect and provide a satellite-type view of the layer. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a correlation coefficient (ZNCC: Zero-mean Normalized Cross-Correlation, or ZSSD: Zero-mean Sum of Squared Differences). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in June 2014 in France. The images were taken simultaneously at Marnay (47°17'31.5" N, 5°44'58.8" E; altitude 275 m), 25 km northwest of Besancon, and at Mont Poupet (46°58'31.5" N, 5°52'22.7" E; altitude 600 m), 43 km southwest of Besancon. 3D maps of natural cirrus clouds and artificial ones such as aircraft trails are retrieved and compared with pseudo-relief intensity maps of the same region. The mean altitude of the cirrus barycenter was located at 8.5 ± 1 km on June 11.
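The ZNCC matching score used to pair points between the two views is compact to state in numpy; the patch below is illustrative:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two same-size patches:
    +1 for a perfect match up to gain and offset, -1 for inverted contrast."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

patch = np.arange(25.0).reshape(5, 5)
print(zncc(patch, 3.0 * patch + 7.0))   # 1.0: invariant to gain and offset
print(zncc(patch, -patch))              # -1.0
```

This gain-and-offset invariance is what makes ZNCC robust for low-contrast targets such as cirrus layers.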
Subramanian, K R; Thubrikar, M J; Fowler, B; Mostafavi, M T; Funk, M W
2000-01-01
We present a technique that accurately reconstructs complex three dimensional blood vessel geometry from 2D intravascular ultrasound (IVUS) images. Biplane x-ray fluoroscopy is used to image the ultrasound catheter tip at a few key points along its path as the catheter is pulled through the blood vessel. An interpolating spline describes the continuous catheter path. The IVUS images are located orthogonal to the path, resulting in a non-uniform structured scalar volume of echo densities. Isocontour surfaces are used to view the vessel geometry, while transparency and clipping enable interactive exploration of interior structures. The two geometries studied are a bovine artery vascular graft having U-shape and a constriction, and a canine carotid artery having multiple branches and a constriction. Accuracy of the reconstructions is established by comparing the reconstructions to (1) silicone moulds of the vessel interior, (2) biplane x-ray images, and (3) the original echo images. Excellent shape and geometry correspondence was observed in both geometries. Quantitative measurements made at key locations of the 3D reconstructions also were in good agreement with those made in silicone moulds. The proposed technique is easily adoptable in clinical practice, since it uses x-rays with minimal exposure and existing IVUS technology. PMID:11105284
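The interpolating-spline step, recovering a continuous catheter path from a few imaged tip positions, can be sketched with scipy; the pullback parameterization and coordinates below are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 3D catheter-tip positions (mm) imaged at a few key points,
# parameterized by pullback distance s
s = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
tips = np.array([[0, 0, 0], [1, 4, 5], [3, 6, 10], [4, 5, 15], [4, 3, 20]],
                dtype=float)

path = CubicSpline(s, tips, axis=0)        # one cubic spline per coordinate
dense = path(np.linspace(0, 20, 201))      # continuous centreline
tangent = path(10.0, 1)                    # path tangent = IVUS slice normal
print(np.allclose(path(s), tips))          # interpolating: passes through knots
```

Each IVUS frame is then positioned orthogonal to the tangent at its pullback distance, yielding the non-uniform structured volume described above.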
Mory, Cyril; Auvray, Vincent; Zhang, Bo; Grass, Michael; Schäfer, Dirk; Chen, S. James; Carroll, John D.; Rit, Simon; Peyrin, Françoise; Douek, Philippe; Boussel, Loïc
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
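One of the four 4D ROOSTER regularization steps, averaging along time outside the motion mask, is simple to sketch in numpy; the array shapes and toy mask below are illustrative:

```python
import numpy as np

def average_outside_mask(volumes, motion_mask):
    """Voxels outside the motion mask are assumed static, so their values
    are averaged along time; voxels inside the mask (heart and vessels)
    keep their temporal variation."""
    volumes = np.array(volumes, dtype=float)      # shape (T, Z, Y, X)
    mean = volumes.mean(axis=0)
    out = volumes.copy()
    out[:, ~motion_mask] = mean[~motion_mask]
    return out

rng = np.random.default_rng(2)
vols = rng.normal(size=(4, 2, 3, 3))              # 4 cardiac phases
mask = np.zeros((2, 3, 3), dtype=bool)
mask[0, 1, 1] = True                              # one "moving" voxel
reg = average_outside_mask(vols, mask)
print(np.ptp(reg[:, 1, 0, 0]))                    # 0.0: static voxel constant in time
```

In the full algorithm this step alternates with the conjugate-gradient data fit, positivity enforcement, and spatial/temporal total variation minimization.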
Method for 3D fibre reconstruction on a microrobotic platform.
Hirvonen, J; Myllys, M; Kallio, P
2016-07-01
Automated handling of a natural fibrous object requires a method for acquiring the three-dimensional geometry of the object, because its dimensions cannot be known beforehand. This paper presents a method for calculating the three-dimensional reconstruction of a paper fibre on a microrobotic platform that contains two microscope cameras. The method is based on detecting curvature changes in the fibre centreline, and using them as the corresponding points between the different views of the images. We test the developed method with four fibre samples and compare the results with the references measured with an X-ray microtomography device. We rotate the samples through 16 different orientations on the platform and calculate the three-dimensional reconstruction to test the repeatability of the algorithm and its sensitivity to the orientation of the sample. We also test the noise sensitivity of the algorithm, and record the mismatch rate of the correspondences provided. We use the iterative closest point algorithm to align the measured three-dimensional reconstructions with the references. The average point-to-point distances between the reconstructed fibre centrelines and the references are 20-30 μm, and the mismatch rate is low. Given the manipulation tolerance, this shows that the method is well suited to automated fibre grasping. This has also been demonstrated with actual grasping experiments. PMID:26695385
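Aligning a reconstructed centreline with its reference via the iterative closest point algorithm, as done for validation above, can be sketched with numpy and scipy; this is a generic point-to-point ICP with a Kabsch rigid fit, not the authors' exact implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Point-to-point ICP: rigidly align 'source' to 'target'."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                  # nearest-neighbour matches
        matched = target[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # Kabsch rotation
        t = mu_m - R @ mu_s
        src = src @ R.T + t
    return src

# A toy fibre centreline and a rotated/translated copy as the "reference"
s = np.linspace(0, 1, 100)
line = np.c_[s, np.sin(2 * s), 0.2 * s ** 2]
ang = 0.3
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
ref = line @ Rz.T + np.array([0.1, -0.2, 0.05])
aligned = icp(line, ref)
d, _ = cKDTree(ref).query(aligned)
print(d.mean())   # small residual distance after alignment
```

The mean point-to-point distance after alignment plays the same role as the 20-30 μm centreline residuals reported above.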
3D model tools for architecture and archaeology reconstruction
NASA Astrophysics Data System (ADS)
Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice
2016-06-01
The main objective of architectural and patrimonial survey is to provide precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological objects and sites) for preservation and protection, for scientific studies and restoration purposes, and for presentation to the general public. Cultural heritage documentation takes an interdisciplinary approach whose purpose is an overall understanding of the object itself and an integration of the information which characterizes it. The accuracy and precision of the model are directly influenced by the quality of the measurements made in the field and by the quality of the software. The software is in continuous development, which brings many improvements. On the other hand, compared to aerial photogrammetry, close-range photogrammetry, and particularly architectural photogrammetry, is not limited to vertical photographs with special cameras; the methodology of terrestrial photogrammetry has changed significantly, and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanning) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method, and the 4th dimension: time. The paper proves its applicability, as photogrammetric technologies are nowadays used at a large scale for obtaining 3D models of cultural heritage objects, efficacious in their assessment and monitoring and thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method, an important issue for both industry and science when deciding in which technology to invest further research and funds.
Optic flow aided navigation and 3D scene reconstruction
NASA Astrophysics Data System (ADS)
Rollason, Malcolm
2013-10-01
An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
Quantitative Reconstructions of 3D Chemical Nanostructures in Nanowires.
Rueda-Fonseca, P; Robin, E; Bellet-Amalric, E; Lopez-Haro, M; Den Hertog, M; Genuist, Y; André, R; Artioli, A; Tatarenko, S; Ferrand, D; Cibert, J
2016-03-01
Energy dispersive X-ray spectrometry is used to extract a quantitative 3D composition profile of heterostructured nanowires. The analysis of hypermaps recorded along a limited number of projections, with a preliminary calibration of the signal associated with each element, is compared to the intensity profiles calculated for a model structure with successive shells of circular, elliptic, or faceted cross sections. This discrete tomographic technique is applied to II-VI nanowires grown by molecular beam epitaxy, incorporating ZnTe and CdTe and their alloys with Mn and Mg, with typical size down to a few nanometers and Mn or Mg content as low as 10%. PMID:26837636
NASA Astrophysics Data System (ADS)
Liu, Qi; Ge, Yi Nan; Wang, Tian Fu; Zheng, Chang Qiong; Zheng, Yi
2005-10-01
In this article, based on two-dimensional color Doppler images, a multiplane transesophageal rotational scanning method is used to acquire original Doppler echocardiograms while the electrocardiogram is recorded synchronously. After filtering and interpolation, surface rendering and volume rendering are performed. By analyzing the color-bar information and the superposition principle of the color Doppler flow image, the grayscale mitral anatomical structure and the color-coded regurgitation velocity parameter were separated from the color Doppler flow images; three-dimensional reconstruction of the mitral structure and of the regurgitation velocity distribution was implemented separately, and fusion visualization of the reconstructed regurgitation velocity distribution with its corresponding 3D mitral anatomical structure was realized. This can be used to observe the position, phase and direction of the mitral regurgitation, and to measure the jet length, area, volume, spatial distribution and severity level. In addition, in patients with eccentric mitral regurgitation, this new modality overcomes the inherent limitations of two-dimensional color Doppler flow imaging by depicting the full extent of the jet trajectory: the area of eccentric regurgitation on the three-dimensional image was much larger than on the two-dimensional image, and the variation of regurgitation area and volume is shown at different angles and different systolic phases. The study shows that three-dimensional color Doppler provides quantitative measurements of eccentric mitral regurgitation that are more accurate and reproducible than conventional color Doppler.
3D reconstruction software comparison for short sequences
NASA Astrophysics Data System (ADS)
Strupczewski, Adam; Czupryński, Błażej
2014-11-01
Large scale multiview reconstruction has recently become a very popular area of research. There are many open source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons between all the available software in terms of accuracy on small datasets that a single user can create. The typical datasets for testing of the software are archeological sites or cities, comprising thousands of images. This paper presents a comparison of currently available open source multiview reconstruction software for small datasets. It also compares the open source solutions with a simple structure from motion pipeline developed by the authors from scratch with the use of the OpenCV and Eigen libraries.
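The geometric core shared by all such structure-from-motion pipelines is triangulating a matched point from two camera projection matrices. As an illustration only (the authors' OpenCV/Eigen pipeline is not reproduced here), a minimal linear (DLT) triangulation in NumPy might look like this:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two synthetic cameras one unit apart, observing the point (0, 0, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
point = triangulate_dlt(P1, P2, x1, x2)
```

A full pipeline would precede this with feature matching and essential-matrix estimation, and follow it with bundle adjustment.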
Quality Analysis of 3d Surface Reconstruction Using Multi-Platform Photogrammetric Systems
NASA Astrophysics Data System (ADS)
Lari, Z.; El-Sheimy, N.
2016-06-01
In recent years, the necessity of accurate 3D surface reconstruction has become more pronounced for a wide range of mapping, modelling, and monitoring applications. The 3D data for satisfying the needs of these applications can be collected using different digital imaging systems. Among them, photogrammetric systems have recently received considerable attention due to significant improvements in digital imaging sensors, the emergence of new mapping platforms, and the development of innovative data processing techniques. To date, a variety of techniques have been proposed for 3D surface reconstruction using imagery collected by multi-platform photogrammetric systems. However, these approaches suffer from the lack of a well-established quality control procedure which evaluates the quality of reconstructed 3D surfaces independently of the utilized reconstruction technique. Hence, this paper aims to introduce a new quality assessment platform for the evaluation of 3D surface reconstruction using photogrammetric data. This quality control procedure is performed while considering the quality of the input data, the processing procedures, and the photo-realistic 3D surface modelling. The feasibility of the proposed quality control procedure is finally verified by quality assessment of the 3D surface reconstruction using images from different photogrammetric systems.
3D reconstruction with two webcams and a laser line projector
NASA Astrophysics Data System (ADS)
Li, Dongdong; Hui, Bingwei; Qiu, Shaohua; Wen, Gongjian
2014-09-01
Three-dimensional (3D) reconstruction is one of the most attractive research topics in photogrammetry and computer vision. Nowadays, 3D reconstruction with simple and consumer-grade equipment plays an important role. In this paper, a 3D reconstruction desktop system is built based on binocular stereo vision using a laser scanner. The hardware requirements are a simple commercial hand-held laser line projector and two common webcams for image acquisition. Generally, 3D reconstruction based on passive triangulation methods requires point correspondences among various viewpoints, and the development of matching algorithms remains a challenging task in computer vision. In our proposal, with the help of a laser line projector, stereo correspondences are established robustly from epipolar geometry and the laser shadow on the scanned object. To establish correspondences more conveniently, epipolar rectification is employed using Bouguet's method after stereo calibration with a printed chessboard. 3D coordinates of the observed points are worked out with ray-ray triangulation, and reconstruction outliers are removed with the planarity constraint of the laser plane. Dense 3D point clouds are derived from multiple scans under different orientations. Each point cloud is derived by sweeping the laser plane across the object to be reconstructed. The Iterative Closest Point algorithm is employed to register the derived point clouds: the rigid body transformation between neighboring scans is obtained to get the complete 3D point cloud. Finally, polygon meshes are reconstructed from the derived point cloud, and color images are used in texture mapping to get a lifelike 3D model. Experiments show that our reconstruction method is simple and efficient.
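The planarity constraint of the laser plane can be sketched as: fit a plane to the triangulated laser points by total least squares and drop points lying too far from it. This is an illustrative NumPy sketch, not the authors' implementation; the distance threshold is a hypothetical parameter.

```python
import numpy as np

def remove_plane_outliers(points, threshold):
    """Fit a plane to 3D points by total least squares (SVD) and keep only
    points within `threshold` of it -- the laser-plane planarity constraint."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                               # direction of least variance
    dist = np.abs((points - centroid) @ normal)   # point-to-plane distances
    return points[dist <= threshold]

# 3x3 grid of points on the z = 0 plane plus one off-plane outlier
xs, ys = np.meshgrid(np.arange(3.0), np.arange(3.0))
pts = np.vstack([np.column_stack([xs.ravel(), ys.ravel(), np.zeros(9)]),
                 [1.0, 1.0, 0.5]])
kept = remove_plane_outliers(pts, 0.2)
```

Total-least-squares plane fitting via the smallest singular vector is the standard choice here because it treats all three coordinates symmetrically.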
Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration
NASA Astrophysics Data System (ADS)
Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.
2012-02-01
The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43°±1.19°, 0.45°±2.17°, 0.23°±1.05°) and (0.03±0.55, -0.03±0.54, -2.73±1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53±0.30 mm distance error.
Registration of 3D spectral OCT volumes using 3D SIFT feature point matching
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan
2009-02-01
The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework [1] to 3D [2]. The SIFT feature extractor locates minima and maxima in the difference of Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096 element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0±3.3 voxels was observed. The accuracy was assessed as the average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
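Finding matching points "by comparing the distance between feature vectors" is typically implemented as a nearest-neighbour search with Lowe's ratio test. A minimal sketch under that assumption follows; the 2-element descriptors are tiny stand-ins for the 4096-element SIFT vectors, and the ratio value is illustrative.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a match only if the best distance is clearly below the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])             # toy descriptors
desc_b = np.array([[0.9, 0.1], [0.0, 1.0], [5.0, 5.0]])
matches = match_descriptors(desc_a, desc_b)
```

The ratio test rejects ambiguous matches, which matters when the two scans overlap only partially as described above.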
Assist feature printability prediction by 3-D resist profile reconstruction
NASA Astrophysics Data System (ADS)
Zheng, Xin; Huang, Jensheng; Chin, Fook; Kazarian, Aram; Kuo, Chun-Chieh
2012-06-01
The process models used during Optical Proximity Correction (OPC) have never been able to reliably predict which sub-resolution assist features (SRAFs) will print. This appears to be due to the fact that OPC process models are generally created using data that does not include printed subresolution patterns. An enhancement to compact modeling capability to predict assist feature (AF) printability is developed and discussed. A hypsometric map representing the 3-D resist profile is built by applying a first-principles approximation to estimate the "energy loss" from the resist top to the resist bottom. Such a 3-D resist profile is an extrapolation of a well-calibrated traditional OPC model without any additional information. Assist features are detected at either the top of the resist (dark field) or the bottom of the resist (bright field); such detection can be done by simply extracting the top- or bottom-of-resist models from our 3-D resist model. No measurement of assist features is needed to build the AF model, although such data can be included if desired; the focus is on resist calibration that accounts for both exposure-dose and focus sensitivities. The reconstructed profile properties may then be used to optimize the printability versus efficacy of an SRAF either prior to or during an OPC run. This approach significantly increases the resist model's capability for predicting printed SRAF accuracy, and no separate SRAF model needs to be calibrated in addition to the OPC model. Without any increase in computation time, this compact model can draw an assist-feature contour with its real placement and size at any vertical plane. The result is compared with and validated against 3-D rigorous modeling as well as SEM images. Since this method does not change any form of compact modeling, it can be integrated into current model-based assist feature (MBAF) solutions without any additional work.
3D digital breast tomosynthesis image reconstruction using anisotropic total variation minimization.
Seyyedi, Saeed; Yildirim, Isa
2014-01-01
This paper presents a compressed sensing based reconstruction method for 3D digital breast tomosynthesis (DBT) imaging. The algebraic reconstruction technique (ART) has been used in DBT imaging together with minimization of the isotropic total variation (TV) of the reconstructed image. However, the resolution in DBT differs in the sagittal and axial directions, which should be accounted for during TV minimization. In this study we develop a 3D anisotropic TV (ATV) minimization that considers the different resolutions in different directions. A customized 3D Shepp-Logan phantom was generated to mimic a real DBT image by considering the overlapping tissue and directional resolution issues. Results of ART, ART+3D TV and ART+3D ATV are compared using structural similarity (SSIM) diagrams. PMID:25571377
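An anisotropic TV term of the kind described can be written as a weighted sum of absolute finite differences, with one weight per axis to reflect the different sagittal and axial resolutions. This is an illustrative formulation of the penalty only, not necessarily the exact functional or weights used in the paper:

```python
import numpy as np

def anisotropic_tv(vol, weights):
    """Weighted anisotropic TV of a 3D volume: sum of absolute finite
    differences along each axis, each axis weighted for its own resolution."""
    return sum(w * np.sum(np.abs(np.diff(vol, axis=axis)))
               for axis, w in enumerate(weights))

vol = np.zeros((2, 2, 2))
vol[1, 1, 1] = 1.0                           # a single bright voxel
tv = anisotropic_tv(vol, (1.0, 2.0, 3.0))    # 1*1 + 2*1 + 3*1 = 6.0
```

Setting all weights equal recovers the isotropic-per-axis TV that ART+TV would minimize, which is exactly the degenerate case the paper argues against.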
3D parameter reconstruction in hyperspectral diffuse optical tomography
NASA Astrophysics Data System (ADS)
Saibaba, Arvind K.; Krishnamurthy, Nishanth; Anderson, Pamela G.; Kainerstorfer, Jana M.; Sassaroli, Angelo; Miller, Eric L.; Fantini, Sergio; Kilmer, Misha E.
2015-03-01
The imaging of shape perturbation and chromophore concentration using Diffuse Optical Tomography (DOT) data can be mathematically described as an ill-posed and non-linear inverse problem. The reconstruction algorithm for hyperspectral data using a linearized Born model is prohibitively expensive, both in terms of computation and memory. We model the shape of the perturbation using a parametric level-set (PaLS) approach. We discuss novel computational strategies for reducing the computational cost, based on a Krylov subspace approach for parametric linear systems and a compression strategy for the parameter-to-observation map. We demonstrate the validity of our approach by comparison with experiments.
Robust registration for removing vibrations in 3D reconstruction of web material
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Garcia, Daniel F.
2015-05-01
Vibrations are a major challenge in laser-based 3D reconstruction of web material. In uncontrolled environments, the movement of web material forward along a track is inevitably affected by vibrations. These oscillations significantly degrade the performance of the 3D reconstruction system, as they are incorrectly interpreted as irregularities on the surface of the material, leading to an erroneous reconstruction of the 3D surface. This work proposes a method to estimate and remove these vibrations based on a robust registration procedure. Registration is used to estimate the vibrations, and a rigid transformation is used to compensate for the movement, removing the effects of vibrations on the 3D reconstruction. The proposed method is applied to an extensive dataset, both synthetic and real, with very good results.
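One simple way to realize the idea of registering profiles robustly against vibration is to estimate each profile's offset as the median deviation from a reference profile; the median ignores genuine surface irregularities while capturing the whole-profile shift. This is a hypothetical NumPy sketch, much simpler than the paper's registration procedure:

```python
import numpy as np

def remove_vibration(profiles):
    """Robustly register each laser profile (one row) to a reference: the
    median offset ignores genuine surface defects, so only vibration is removed."""
    reference = np.median(profiles, axis=0)
    offsets = np.median(profiles - reference, axis=1, keepdims=True)
    return profiles - offsets

# Flat surface scanned three times with vibration offsets 0, +1, -2;
# the middle profile also crosses a genuine surface defect of height 0.5
profiles = np.array([[0.0] * 5, [1.0] * 5, [-2.0] * 5])
profiles[1, 2] += 0.5
corrected = remove_vibration(profiles)
```

After correction, the three profiles coincide except at the defect, which survives: exactly the behaviour a vibration-removal step must have.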
NASA Astrophysics Data System (ADS)
Vallet, B.; Soheilian, B.; Brédif, M.
2014-08-01
The 3D reconstruction of similar 3D objects detected in 2D faces a major issue when it comes to grouping the 2D detections into clusters to be used to reconstruct the individual 3D objects. Simple clustering heuristics fail as soon as similar objects are close. This paper formulates a framework to use the geometric quality of the reconstruction as a hint to do a proper clustering. We present a methodology to solve the resulting combinatorial optimization problem with some simplifications and approximations in order to make it tractable. The proposed method is applied to the reconstruction of 3D traffic signs from their 2D detections to demonstrate its capacity to solve ambiguities.
Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; White, Stuart C.
1992-05-01
This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study to demonstrate the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction software requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method of producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla where the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-rendered lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for 3-D reconstruction, as well as cautionary language that should accompany the 3-D images.
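Threshold method (2), maximum theoretical fidelity, reduces to a midpoint computation between the two mean tissue intensities. The intensity values in the example below are illustrative stand-ins, not measurements from the case study:

```python
def fidelity_threshold(mean_bone_intensity, mean_soft_tissue_intensity):
    """Maximum-theoretical-fidelity bone threshold: the midpoint between the
    average cortical-bone and average soft-tissue image intensities."""
    return 0.5 * (mean_bone_intensity + mean_soft_tissue_intensity)

# Illustrative CT numbers (roughly HU-like), not values from the case study
threshold = fidelity_threshold(1000, 40)   # 520.0
```

The case study's point is that sweeping this threshold (method (3)) can change apparent lesion dimensions by tens of percent, so the chosen value should always be reported alongside the rendering.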
Bayesian 3D velocity field reconstruction with VIRBIUS
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem
2016-03-01
I describe a new Bayesian-based algorithm to infer the full three-dimensional velocity field from observed distances and spectroscopic galaxy catalogues. In addition to the velocity field itself, the algorithm reconstructs true distances, some cosmological parameters and specific non-linearities in the velocity field. The algorithm takes care of selection effects and miscalibration issues, and can be easily extended to handle direct fitting of, e.g., the inverse Tully-Fisher relation. I first describe the algorithm in detail alongside its performance. This algorithm is implemented in the VIRBIUS (VelocIty Reconstruction using Bayesian Inference Software) software package. I then test it on different mock distance catalogues with a varying complexity of observational issues. The model proved to give robust measurements of velocities for mock catalogues of 3000 galaxies. I expect the core of the algorithm to scale to tens of thousands of galaxies. It holds the promise of giving a better handle on future large and deep distance surveys, for which individual distance errors would impede velocity field inference.
Reconstructing 3-D Ship Motion for Synthetic Aperture Sonar Processing
NASA Astrophysics Data System (ADS)
Thomsen, D. R.; Chadwell, C. D.; Sandwell, D.
2004-12-01
We are investigating the feasibility of coherent ping-to-ping processing of multibeam sonar data for high-resolution mapping and change detection in the deep ocean. Theoretical calculations suggest that standard multibeam resolution can be improved from 100 m to ~10 m through coherent summation of pings, similar to synthetic aperture radar image formation. A requirement for coherent summation of pings is to correct the phase of the return echoes to an accuracy of ~3 cm at a sampling rate of ~10 Hz. In September of 2003, we conducted a seagoing experiment aboard R/V Revelle to test these ideas. Three geodetic-quality GPS receivers were deployed to recover 3-D ship motion to an accuracy of ±3 cm at a 1 Hz sampling rate [Chadwell and Bock, GRL, 2001]. Additionally, inertial navigation (INS) data from fiber-optic gyroscopes and pendulum-type accelerometers were collected at a 10 Hz rate. Independent measurements of ship orientation (yaw, pitch, and roll) from the GPS and INS show agreement to an RMS accuracy of better than 0.1 degree. Because inertial navigation hardware is susceptible to drift, these measurements were combined with the GPS to achieve both high accuracy and a high sampling rate. To preserve the short-timescale accuracy of the INS and the long-timescale accuracy of the GPS measurements, time-filtered differences between the GPS and INS were subtracted from the INS integrated linear velocities. An optimal filter length of 25 s was chosen to force the RMS difference between the GPS and the integrated INS to be on the order of the accuracy of the GPS measurements. This analysis provides an upper bound on 3-D ship motion accuracy. Additionally, errors in the attitude can translate to the projections of motion for individual hydrophones. With lever arms on the order of 5 m, these errors will likely be ~1 mm. Based on these analyses, we expect to achieve the 3-cm accuracy requirement. Using full-resolution hydrophone data collected by a SIMRAD EM/120 echo sounder
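The described GPS/INS fusion, subtracting the time-filtered GPS-INS difference from the INS signal, is a complementary-filter idea: the INS keeps the short-timescale detail while the GPS anchors the long timescale. A simplified 1D sketch follows; the noise-free GPS and purely linear INS drift are illustrative stand-ins, not the experiment's data:

```python
import numpy as np

def fuse_gps_ins(ins, gps, window=25):
    """Subtract the low-pass-filtered GPS/INS difference from the INS signal,
    keeping INS short-timescale detail and GPS long-timescale accuracy."""
    drift_estimate = np.convolve(ins - gps, np.ones(window) / window, mode='same')
    return ins - drift_estimate

t = np.arange(500)
truth = np.sin(2 * np.pi * t / 100)   # true ship motion component
ins = truth + 0.01 * t                # INS drifts away over time
gps = truth.copy()                    # GPS: accurate but (here) noise-free
fused = fuse_gps_ins(ins, gps)
```

With a linear drift and an odd filter window, the moving average recovers the drift exactly away from the edges, so the fused signal matches the truth there; only the edge samples retain residual error.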
Application of 3D reconstruction for surgical treatment of hepatic alveolar echinococcosis
He, Yi-Biao; Bai, Lei; Aji, Tuerganaili; Jiang, Yi; Zhao, Jin-Ming; Zhang, Jin-Hui; Shao, Ying-Mei; Liu, Wen-Ya; Wen, Hao
2015-01-01
AIM: To evaluate the reliability and accuracy of three-dimensional (3D) reconstruction for liver resection in patients with hepatic alveolar echinococcosis (HAE). METHODS: One-hundred and six consecutive patients with HAE underwent hepatectomy at our hospital between May 2011 and January 2015. Fifty-nine patients underwent preoperative 3D reconstruction and “virtual” 3D liver resection before surgery (Group A). The other 47 patients underwent conventional imaging methods for preoperative assessment (Group B). Outcomes of hepatectomy were compared between the two groups. RESULTS: There was no significant difference in preoperative data between the two groups. Compared with patients in Group B, those in Group A had a significantly shorter operation time (227.1 ± 51.4 vs 304.6 ± 88.1 min; P < 0.05), less intraoperative blood loss (308.1 ± 135.4 vs 458.1 ± 175.4 mL; P < 0.05), and lower requirement for intraoperative blood transfusion (186.4 ± 169.6 vs 289.4 ± 199.2 mL; P < 0.05). Estimated resection liver volumes in both groups correlated well with actual graft weight (Group A: r = 0.978; Group B: r = 0.960). There was a significantly higher serum level of albumin in Group A (26.3 ± 5.9 vs 22.6 ± 4.3 g/L, P < 0.05). Other postoperative laboratory parameters (serum levels of aminotransferase and bilirubin; prothrombin time) and duration of postoperative hospital stay were similar. Sixteen complications occurred in Group A and 19 in Group B. All patients were followed for 3-46 (mean, 17.3) mo. There was no recurrence of lesions in Group A, but two recurrences in Group B. There were three deaths: two from cerebrovascular accidents, and one from a car accident. CONCLUSION: 3D reconstruction provides comprehensive and precise anatomical information for the liver. It also improves the chance of success and reduces the risk of hepatectomy in HAE. PMID:26401085
[3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].
Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu
2015-08-01
The aim of this study was to propose a three-dimensional projection onto convex sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. Firstly, we built low resolution 3D images which have spatial displacements at the sub-pixel level between each other, and generated the reference image. Then, we mapped the low resolution images into the high resolution reference image using 3D motion estimation, and revised the reference image based on the consistency-constraint convex sets to reconstruct the 3D high resolution images iteratively. Finally, we displayed the images of different resolutions simultaneously. We then estimated the performance of the proposed method on 5 image sets and compared it with that of 3 interpolation reconstruction methods. The experiments showed that the performance of the 3D POCS algorithm was better than that of the 3 interpolation reconstruction methods in both subjective and objective respects, and that the mixed display mode is suitable for 3D visualization of high resolution pulmonary nodules. PMID:26710449
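The POCS principle behind such algorithms can be illustrated in 1D: each low-resolution sample constrains the mean of a block of high-resolution samples, and cyclically projecting the estimate onto these consistency sets recovers the high-resolution signal. This toy sketch is far simpler than the paper's 3D method with motion estimation, and is offered only to show the projection mechanics:

```python
import numpy as np

def downsample(x, factor, shift):
    """Block means of x over windows [k*factor+shift, (k+1)*factor+shift)."""
    n_blocks = (len(x) - shift + factor - 1) // factor
    return np.array([x[k * factor + shift:(k + 1) * factor + shift].mean()
                     for k in range(n_blocks)])

def pocs_superres(observations, shifts, factor, n_hi, n_iter=2000):
    """Cyclic POCS: each low-res sample defines the convex set of high-res
    signals whose block mean equals it; project onto every set in turn."""
    x = np.zeros(n_hi)
    for _ in range(n_iter):
        for obs, s in zip(observations, shifts):
            for k, v in enumerate(obs):
                idx = np.arange(k * factor + s, min((k + 1) * factor + s, n_hi))
                x[idx] += v - x[idx].mean()   # orthogonal projection onto the set
    return x

truth = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0, 2.0, 1.0])
shifts = (0, 1)                                # sub-sample displacements
obs = [downsample(truth, 2, s) for s in shifts]
recon = pocs_superres(obs, shifts, 2, len(truth))
```

Because the true signal lies in every consistency set, each projection is non-expansive toward it, and with enough shifted observations the intersection pins down the high-resolution signal uniquely.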
Image-Based 3d Reconstruction and Analysis for Orthodontia
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2012-08-01
Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of tooth parameters and on designing the ideal dental arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are bonded to the teeth, and a wire of given shape, which is clamped by these brackets to produce the necessary forces to move every tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying the standard approach to the wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation, aimed at overcoming these disadvantages, is proposed. The proposed approach provides accurate measurements of the tooth parameters needed for adequate planning, designing the correct tooth positions and monitoring the treatment process. The developed technique applies photogrammetric means for dental arch 3D model generation, bracket position determination and tooth displacement analysis.
3D surface reconstruction based on image stitching from gastric endoscopic video sequence
NASA Astrophysics Data System (ADS)
Duan, Mengyao; Xu, Rong; Ohya, Jun
2013-09-01
This paper proposes a method for reconstructing the detailed 3D structure of internal organs such as the gastric wall from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition one video sequence into groups, where each group consists of two successive frames (an image pair), and each pair in each group contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SfM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds, by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as the seed points, and then obtain the dense point cloud from the sparse point cloud via a depth testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves a high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
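Estimating the rigid transformation between overlapping point clouds from matched feature points is commonly done with the Kabsch least-squares alignment. The sketch below shows that standard estimator under the assumption of known correspondences; it is not necessarily the exact scheme the authors use:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t with dst ≈ R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])  # 90° about z
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

In practice this estimator would run inside a RANSAC loop over the SIFT matches to reject outlier correspondences before stitching.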
Automatic Texture Reconstruction of 3d City Model from Oblique Images
NASA Astrophysics Data System (ADS)
Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang
2016-06-01
In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework to generate textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the created texture without resampling. Experimental results show that our method effectively mitigates the occurrence of texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
Roles of equalization in radar imaging: modeling for superresolution in 3D reconstruction
NASA Astrophysics Data System (ADS)
Merched, Ricardo
2012-12-01
In radar imaging, resolution is generally dictated by the corresponding system point spread function, the response to a point source as a result of an external excitation. This notion of resolution turns out to be rather questionable, as the interpretation of echoes received from a range of continuous targets according to a linear model allows one to cast the imaging problem as a communication system that maps the target reflectivity function onto measurements, which in turn suggests that, by virtue of sampling and equalization, one can achieve unlimited spatial resolution. This article reviews the fundamental problem inherent to pulse compression in a multistatic multiple-input multiple-output (MIMO) scenario, from a communications viewpoint, in both focused and unfocused scenarios. We generalize the notion of 1D range compression and replace it by a more general 4D pulse compression. The process of focusing and scanning over a 3D object can be interpreted as a MIMO 4D convolution between a reflectivity tensor and a space-varying system, which naturally induces a 4D MIMO channel convolution model. This implies that several well-established block and linear equalization methods can be easily extended to a 3D scenario with the purpose of achieving exact reconstruction of a given reflectivity volume. That is, assuming that no multiple scattering occurs, resolution is limited in range only by the sampling device in the unfocused case, while it is unlimited in the case of focusing at multiple depths. Exact reconstruction under a zero-forcing or least-squares criterion depends solely on the amount of diversity induced by sampling in both space (via the scanning rate) and time (via the sampling rate), which further allows for a tradeoff between range and cross-range resolution. For instance, the fastest scanning rate is achieved by steering non-overlapping beams, in which case portions of the object can be reconstructed independently of each other.
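The zero-forcing/least-squares equalization viewpoint can be illustrated in 1D: build the pulse-convolution matrix relating reflectivity to received echo and invert it in the least-squares sense, recovering the reflectivity exactly in the noiseless case. This is a minimal stand-in for the article's 4D MIMO formulation:

```python
import numpy as np

def deconvolve_ls(pulse, received, n):
    """Least-squares equalization of the range-compression model
    received = conv(pulse, reflectivity): build the convolution matrix
    and invert it, recovering the n-sample reflectivity."""
    H = np.zeros((len(received), n))
    for i in range(n):
        H[i:i + len(pulse), i] = pulse    # column i = pulse delayed by i
    x, *_ = np.linalg.lstsq(H, received, rcond=None)
    return x

pulse = np.array([1.0, 0.5])                 # transmitted pulse
reflectivity = np.array([0.0, 2.0, 0.0, -1.0, 0.0])
received = np.convolve(pulse, reflectivity)  # noiseless measured echo
recovered = deconvolve_ls(pulse, received, len(reflectivity))
```

As in the article's argument, exact recovery here hinges on the measurement operator having full column rank, i.e., on sufficient sampling diversity; with noise, the least-squares solution degrades gracefully where zero-forcing would amplify it.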
Nguyen, Duc V; Vo, Quang N; Le, Lawrence H; Lou, Edmond H M
2015-02-01
Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine associated with vertebral rotation. The Cobb angle and axial vertebral rotation are important parameters to assess the severity of scoliosis. However, vertebral rotation is seldom measured from radiographs because the measurement is time-consuming. Different techniques have been developed to extract 3D spinal information; among them, ultrasound imaging is a promising method. This pilot study reports an image processing method to reconstruct the posterior surface of vertebrae from 3D ultrasound data. Three cadaver vertebrae, a Sawbones spine phantom, and a spine from a child with AIS were used to validate the development. The in-vitro results showed that the surface of the reconstructed image was visually similar to the original objects. The dimension measurement error was <5 mm and the Pearson correlation was >0.99. The results also showed a high accuracy in vertebral rotation, with errors of 0.8 ± 0.3°, 2.8 ± 0.3° and 3.6 ± 0.5° for rotation values of 0°, 15° and 30°, respectively. Meanwhile, the difference in the Cobb angle between the phantom and the image was 4°, and in the vertebral rotation at the apex it was 2°. The Cobb angle measured from the in-vivo ultrasound image was 4° different from the radiograph. PMID:25550193
3D Surface Reconstruction of Rills in a Spanish Olive Grove
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Seeger, Manuel; Wirtz, Stefan; Taguas, Encarnación; Ries, Johannes B.
2016-04-01
The low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique is used for 3D surface reconstruction and difference calculation of an 18 meter long rill in southern Spain (Andalusia, Puente Genil). The images were taken with a Canon HD video camera before and after a rill experiment in an olive grove. Compared to a photo camera, recording with a video camera saves a great deal of time, and the method also guarantees more than adequately overlapping sharp images. For each model, approximately 20 minutes of video were taken. As SfM needs single images, the sharpest image was automatically selected from each interval of 8 frames; sharpness was estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs and recovers the camera and feature positions. Finally, by triangulation of camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models, a visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The results show that rills in olive groves are highly dynamic due to the lack of vegetation cover under the trees, so that the rill can incise down to the bedrock. Another reason for the high activity is the intensive use of machinery.
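The sharpest-frame selection step can be sketched with a simple derivative-based metric; here the mean squared intensity gradient serves as the metric, which may differ in detail from the one the authors used:

```python
import numpy as np

def sharpness(frame):
    """Derivative-based sharpness metric: mean squared intensity gradient."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def sharpest_frame(frames):
    """Index of the sharpest frame within one interval of video frames."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))

blurry = np.full((8, 8), 0.5)                               # featureless frame
sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # checkerboard
best = sharpest_frame([blurry, sharp, np.zeros((8, 8))])
```

Run over every 8-frame interval of the video, this yields one sharp, well-exposed image per interval for the SfM pipeline.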
Lorintiu, Oana; Liebgott, Hervé; Alessandrini, Martino; Bernard, Olivier; Friboulet, Denis
2015-12-01
In this paper we present a compressed sensing (CS) method adapted to 3D ultrasound (US) imaging. In contrast to previous work, we propose a new approach based on the use of learned overcomplete dictionaries that allow for much sparser representations of the signals, since they are optimized for a particular class of images such as US images. In this study, the dictionary was learned using the K-SVD algorithm and CS reconstruction was performed on the non-log envelope data by removing 20% to 80% of the original data. Using numerically simulated images, we evaluate the influence of the training parameters and of the sampling strategy. The latter is done by comparing the two most common sampling patterns, i.e., point-wise and line-wise random patterns. The results show in particular that line-wise sampling yields an accuracy comparable to the conventional point-wise sampling. This indicates that CS acquisition of 3D data is feasible in a relatively simple setting, and thus offers the perspective of increasing the frame rate by skipping the acquisition of RF lines. Next, we evaluated this approach on US volumes of several ex vivo and in vivo organs. We first show that the learned dictionary approach yields better performance than conventional fixed transforms such as Fourier or discrete cosine. Finally, we investigate the generality of the learned dictionary approach and show that it is possible to build a general dictionary that allows reliable reconstruction of different volumes of different ex vivo or in vivo organs. PMID:26057610
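The two random sampling patterns compared in the study can be illustrated with a small sketch; the array shape and keep-fraction are illustrative, not the paper's settings:

```python
import numpy as np

def pointwise_mask(shape, keep, rng):
    # Point-wise pattern: each sample is kept independently with
    # probability `keep`.
    return rng.random(shape) < keep

def linewise_mask(shape, keep, rng):
    # Line-wise pattern: whole RF lines (rows) are kept or skipped,
    # which is what actually saves acquisition time on a scanner.
    rows = rng.random(shape[0]) < keep
    return np.repeat(rows[:, None], shape[1], axis=1)

rng = np.random.default_rng(1)
shape = (64, 64)
pm = pointwise_mask(shape, 0.5, rng)  # ~50% of samples kept
lm = linewise_mask(shape, 0.5, rng)   # ~50% of lines kept
```

The abstract's finding is that reconstructions from `lm`-style masks match those from `pm`-style masks, despite the coarser structure.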
An interface reconstruction method based on an analytical formula for 3D arbitrary convex cells
NASA Astrophysics Data System (ADS)
Diot, Steven; François, Marianne M.
2016-01-01
In this paper, we are interested in an interface reconstruction method for 3D arbitrary convex cells that could be used in multi-material flow simulations, for instance. We assume that the interface is represented by a plane whose normal vector is known, and we focus on the volume-matching step, which consists of finding the plane constant so that the plane splits the cell according to a given volume fraction. We follow the same approach as in the authors' recent publication for 2D arbitrary convex cells in planar and axisymmetric geometries, namely we derive an analytical formula for the volume of the specific prismatoids obtained when decomposing the cell using the planes that are parallel to the interface and pass through all the cell nodes. This formula is used to bracket the interface plane constant, such that the volume-matching problem is rewritten in a single prismatoid in which the same formula is used to find the final solution. The proposed method is tested against a large number of reproducible configurations and shown to be at least five times faster.
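The two-stage idea (bracket the plane constant between the node constants, then solve inside the bracket) can be sketched on a unit cube; here a grid-based volume estimate and bisection stand in for the paper's analytical prismatoid formula, and both stand-ins are assumptions for illustration only:

```python
import numpy as np

def halfspace_volume(d, n, m=40):
    # Fraction of the unit cube with n·x <= d, evaluated on a dense grid
    # of cell centres (stand-in for the closed-form prismatoid volume).
    g = (np.arange(m) + 0.5) / m
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    return float(np.mean(np.stack([X, Y, Z], axis=-1) @ n <= d))

def match_volume(n, verts, target):
    # Stage 1: bracket the plane constant between consecutive node
    # constants (dot products of the cell vertices with the normal).
    consts = np.sort(np.unique(verts @ n))
    lo, hi = consts[0], consts[-1]
    for a, b in zip(consts, consts[1:]):
        if halfspace_volume(a, n) <= target <= halfspace_volume(b, n):
            lo, hi = a, b
            break
    # Stage 2: solve inside the bracket (bisection here; the paper
    # uses the same analytical formula to solve directly).
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if halfspace_volume(mid, n) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 float)
n = np.array([0.0, 0.0, 1.0])
d = match_volume(n, verts, 0.25)  # plane z = d cutting off 25% of the cube
```

Because the volume is monotone in the plane constant, the bracketing stage guarantees the final search acts on a single prismatoid.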
NASA Astrophysics Data System (ADS)
Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali
2015-01-01
In order to improve the quality of 3D X-ray tomography reconstruction for Non-Destructive Testing (NDT), we investigate in this paper hierarchical Bayesian methods. In NDT, useful prior information on the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only the volume is estimated thanks to the prior model of the volume, but also the hyperparameters of this prior. This additional complexity in the reconstruction methods, when applied to large volumes (from 512³ to 8192³ voxels), results in an increasing computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper lead to an algorithmic acceleration by Variational Bayesian Approximation (VBA) [1] and hardware acceleration thanks to projection and back-projection operators parallelized on many-core processors such as GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image, implemented in a hierarchical way [3, 4, 1]. The operators H (forward or projection) and Ht (adjoint or back-projection), implemented on multi-GPU [2], have been used in this study. Different methods are evaluated on the synthetic "Shepp-Logan" volume in terms of quality and time of reconstruction. We used several simple regularizations of order 1 and order 2. Other prior models also exist [5]. Sometimes, for a discrete image, segmentation and reconstruction can be performed at the same time, allowing reconstruction from fewer projections.
NASA Astrophysics Data System (ADS)
Yang, R.; Song, A.; Li, X. D.; Lu, Y.; Yan, R.; Xu, B.; Li, X.
2014-10-01
A 3D reconstruction solution for ultrasound Joule heat density tomography based on the acousto-electric (AE) effect by deconvolution is proposed for noninvasive imaging of biological tissue. Compared with ultrasound current source density imaging, ultrasound Joule heat density tomography does not require any a priori knowledge of the conductivity distribution and lead fields, so it can achieve better imaging results, adapts better to the environment, and has a wider scope of application. For a general 3D volume conductor with a broadly distributed current density field, the ultrasound pressure in the AE equation cannot simply be separated from the 3D integration, so this is not a common modulation, and the basebanding (heterodyning) method is no longer suitable for separating the Joule heat density from the AE signals. In the proposed method, the measurement signal is viewed as the output of the Joule heat density convolved with the ultrasound wave. As a result, the internal 3D Joule heat density can be reconstructed by means of Wiener deconvolution. A series of computer simulations, set up for breast cancer imaging applications with consideration of ultrasound beam diameter, noise level, conductivity contrast, and the position dependency and size of simulated tumors, has been conducted to evaluate the feasibility and performance of the proposed reconstruction method. The computer simulation results demonstrate that high-spatial-resolution 3D ultrasound Joule heat density imaging is feasible using the proposed method, and it has potential applications to breast cancer detection and imaging of other organs.
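The Wiener-deconvolution step can be illustrated in 1-D: recover a source blurred by a known pulse via the frequency-domain Wiener filter. The pulse shape and SNR below are illustrative, and circular convolution replaces the full 3-D acousto-electric model:

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    # Frequency-domain Wiener filter: X = conj(H) Y / (|H|^2 + 1/SNR).
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(X))

# Toy 1-D stand-in for the 3-D Joule-heat problem: a point source
# blurred by a short "ultrasound" pulse (circular convolution).
x = np.zeros(64)
x[20] = 1.0
h = np.array([0.25, 0.5, 0.25])
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))
x_hat = wiener_deconvolve(y, h, snr=1e6)
```

The 1/SNR term regularizes frequencies where the pulse spectrum H vanishes, which is what makes the inversion stable in the presence of noise.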
Visualization of 3D elbow kinematics using reconstructed bony surfaces
NASA Astrophysics Data System (ADS)
Lalone, Emily A.; McDonald, Colin P.; Ferreira, Louis M.; Peters, Terry M.; King, Graham J. W.; Johnson, James A.
2010-02-01
An approach for direct visualization of continuous three-dimensional elbow kinematics using reconstructed surfaces has been developed. Simulation of valgus motion was achieved in five cadaveric specimens using an upper arm simulator. Direct visualization of the motion of the ulna and humerus at the ulnohumeral joint was obtained using a contact based registration technique. Employing fiducial markers, the rendered humerus and ulna were positioned according to the simulated motion. The specific aim of this study was to investigate the effect of radial head arthroplasty on restoring elbow joint stability after radial head excision. The position of the ulna and humerus was visualized for the intact elbow and following radial head excision and replacement. Visualization of the registered humerus/ulna indicated an increase in valgus angulation of the ulna with respect to the humerus after radial head excision. This increase in valgus angulation was restored to that of an elbow with a native radial head following radial head arthroplasty. These findings were consistent with previous studies investigating elbow joint stability following radial head excision and arthroplasty. The current technique was able to visualize a change in ulnar position in a single DoF. Using this approach, the coupled motion of ulna undergoing motion in all 6 degrees-of-freedom can also be visualized.
3D Reconstruction of a Rotating Erupting Prominence
NASA Technical Reports Server (NTRS)
Thompson, W. T.; Kliem, B.; Torok, T.
2011-01-01
A bright prominence associated with a coronal mass ejection (CME) was seen erupting from the Sun on 9 April 2008. This prominence was tracked by both the Solar Terrestrial Relations Observatory (STEREO) EUVI and COR1 telescopes, and was seen to rotate about the line of sight as it erupted; therefore, the event has been nicknamed the "Cartwheel CME." The threads of the prominence in the core of the CME quite clearly indicate the structure of a weakly to moderately twisted flux rope throughout the field of view, up to heliocentric heights of 4 solar radii. Although the STEREO separation was 48 deg, it was possible to match some sharp features in the later part of the eruption as seen in the 304 Angstrom line in EUVI and in the H alpha-sensitive bandpass of COR1 by both STEREO Ahead and Behind. These features could then be traced out in three dimensional space, and reprojected into a view in which the eruption is directed towards the observer. The reconstructed view shows that the alignment of the prominence to the vertical axis rotates as it rises up to a leading-edge height of approximately 2.5 solar radii, and then remains approximately constant. The alignment at 2.5 solar radii differs by about 115 deg. from the original filament orientation inferred from H alpha and EUV data, and the height profile of the rotation, obtained here for the first time, shows that two thirds of the total rotation is reached within approximately 0.5 solar radii above the photosphere. These features are well reproduced by numerical simulations of an unstable moderately twisted flux rope embedded in external flux with a relatively strong shear field component.
Near-infrared optical imaging of human brain based on the semi-3D reconstruction algorithm
NASA Astrophysics Data System (ADS)
Liu, Ming; Meng, Wei; Qin, Zhuanping; Zhou, Xiaoqing; Zhao, Huijuan; Gao, Feng
2013-03-01
In non-invasive brain imaging with near-infrared light, a precise head model is of great significance to the forward model and the image reconstruction. To deal with the individual differences of human head tissues and the problem of irregular curvature, in this paper we extracted the head structure with Mimics software from the MRI image of a volunteer. This scheme makes it possible to assign the optical parameters to every layer of the head tissues reasonably and to solve the diffusion equation with finite-element analysis. For the solution of the inverse problem, a semi-3D reconstruction algorithm is adopted to trade off the computational cost and accuracy between the full 3-D and the 2-D reconstructions. In this scheme, the changes in the optical properties of the inclusions are assumed either axially invariable or confined to the imaging plane, while the 3-D nature of the photon migration is still retained. This leads to a 2-D inverse problem with a matched 3-D forward model. Simulation results show that, compared to the full 3-D reconstruction algorithm, the semi-3D reconstruction algorithm cuts the computation time by 27%.
3-D dynamic rupture simulations by a finite volume method
NASA Astrophysics Data System (ADS)
Benjemaa, M.; Glinsky-Olivier, N.; Cruz-Atienza, V. M.; Virieux, J.
2009-07-01
Dynamic rupture of a 3-D spontaneous crack of arbitrary shape is investigated using a finite volume (FV) approach. The full domain is decomposed into tetrahedra, whereas the surface on which the rupture takes place is discretized with triangles that are faces of tetrahedra. First, the elastodynamic equations are recast into a pseudo-conservative form for an easy application of the FV discretization. Explicit boundary conditions are given using criteria based on the conservation of discrete energy through the crack surface. Using a stress-threshold criterion, these conditions specify fluxes through those triangles that have suffered rupture. On these broken surfaces, stress follows a linear slip-weakening law, although other friction laws can be implemented. For Problem Version 3 of the dynamic-rupture code verification exercise conducted by the SCEC/USGS, numerical solutions on a planar fault exhibit a very high convergence rate and are in good agreement with the reference solution provided by a finite difference (FD) technique. For a non-planar fault of parabolic shape, numerical solutions agree satisfactorily with those obtained with a semi-analytical boundary integral method in terms of shear stress amplitudes, stopping-phase arrival times and stress overshoots. Differences between solutions are attributed to the low-order interpolation of the FV approach, whose results are particularly sensitive to the mesh regularity (structured/unstructured). We expect this method, which is well adapted for multiprocessor parallel computing, to become competitive with others for solving large-scale dynamic rupture scenarios of seismic sources in the near future.
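The linear slip-weakening law mentioned above can be written down directly: fault strength drops linearly from a static to a dynamic level over a critical slip distance. The numeric values in the demo are illustrative, not the SCEC/USGS benchmark parameters:

```python
def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    # Linear slip-weakening friction: strength falls from the static
    # level tau_s to the dynamic level tau_d as slip grows from 0 to
    # the critical distance d_c, and stays at tau_d afterwards.
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / d_c

# Illustrative values (MPa for strengths, metres for slip):
tau0 = slip_weakening_strength(0.0, 81.6, 63.0, 0.4)   # = tau_s
tau1 = slip_weakening_strength(1.0, 81.6, 63.0, 0.4)   # = tau_d
```

In the FV scheme, this strength caps the shear traction carried by each broken triangle as slip accumulates.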
A simple approach for 3D reconstruction of the spine from biplanar radiography
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Shi, Xinling; Lv, Liang; Guo, Fei; Zhang, Yufeng
2014-04-01
This paper proposes a simple approach for 3D spinal reconstruction from biplanar radiography. The proposed reconstruction consists of reconstructing the 3D central curve of the spine based on the epipolar geometry and automatically aligning vertebrae under the constraint of this curve. The vertebral orientations were adjusted by matching the projections of the 3D pedicles with the 2D pedicles in biplanar radiographs. The user interaction time was within one minute for a thoracic spine. Sixteen pairs of radiographs of a thoracic spinal model were used to evaluate the precision and accuracy. The precision was within 3.1 mm for location and 3.5° for orientation. The accuracy was within 3.5 mm for location and 3.9° for orientation. These results demonstrate that this approach can be a promising tool for obtaining the 3D spinal geometry with acceptable user interaction in scoliosis clinics.
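Reconstructing a 3D point from two calibrated views, the building block of the epipolar reconstruction above, can be sketched with linear (DLT) triangulation. The two orthogonal projection matrices below are a hypothetical frontal/lateral set-up, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    # Linear (DLT) triangulation: each view contributes two rows of the
    # homogeneous system A X = 0; the solution is the SVD null vector.
    rows = []
    for P, (u, v) in ((P1, u1), (P2, u2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical orthogonal biplanar set-up: frontal view drops z,
# lateral view drops x.
P1 = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
P2 = np.array([[0, 0, 1.0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
X = triangulate(P1, P2, (10.0, 20.0), (5.0, 20.0))
```

Running this along matched landmarks on the spine's central curve yields the 3D curve used to constrain vertebra placement.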
3D reconstruction of a human heart fascicle using SurfDriver
NASA Astrophysics Data System (ADS)
Rader, Robert J.; Phillips, Steven J.; LaFollette, Paul S., Jr.
2000-06-01
The Temple University Medical School has a sequence of over 400 serial sections of adult normal ventricular human heart tissue, cut at 25 micrometer thickness. We used a Zeiss Ultraphot with a 4x planapo objective and a Pixera digital camera to make a series of 45 sequential montages to use in the 3D reconstruction of a fascicle (muscle bundle). We wrote custom software to merge 4 smaller image fields from each section into one composite image. We used SurfDriver software, developed by Scott Lozanoff of the University of Hawaii and David Moody of the University of Alberta, for registration, object boundary identification, and 3D surface reconstruction. We used an Epson Stylus Color 900 printer to get photo-quality prints. We describe the challenges and our solutions to the following problems: image acquisition and digitization, image merging, alignment and registration, boundary identification, 3D surface reconstruction, 3D visualization and orientation, snapshots, and photo-quality prints.
Web-based volume slicer for 3D electron-microscopy data from EMDB
Salavert-Torres, José; Iudin, Andrii; Lagerstedt, Ingvar; Sanz-García, Eduardo; Kleywegt, Gerard J.; Patwardhan, Ardan
2016-01-01
We describe the functionality and design of the Volume slicer – a web-based slice viewer for EMDB entries. This tool uniquely provides the facility to view slices from 3D EM reconstructions along the three orthogonal axes and to rapidly switch between them and navigate through the volume. We have employed multiple rounds of user-experience testing with members of the EM community to ensure that the interface is easy and intuitive to use and the information provided is relevant. The impetus to develop the Volume slicer has been calls from the EM community to provide web-based interactive visualisation of 2D slice data. This would be useful for quick initial checks of the quality of a reconstruction. Again in response to calls from the community, we plan to further develop the Volume slicer into a fully-fledged Volume browser that provides integrated visualisation of EMDB and PDB entries from the molecular to the cellular scale. PMID:26876163
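Serving slices along the three orthogonal axes, as the Volume slicer does, reduces to simple array indexing once the reconstruction is held as a 3D array; this minimal sketch assumes a numpy volume rather than EMDB's actual map format:

```python
import numpy as np

def orthogonal_slices(volume, i, j, k):
    # The three orthogonal slices through voxel (i, j, k) of a 3D map:
    # one perpendicular to each axis, as shown by a web slice viewer.
    return volume[i, :, :], volume[:, j, :], volume[:, :, k]

vol = np.arange(4 * 5 * 6).reshape(4, 5, 6)
s0, s1, s2 = orthogonal_slices(vol, 1, 2, 3)
```

Switching axes or navigating through the volume is then just a change of index, which is why the viewer can respond interactively.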
A fast 3D reconstruction system with a low-cost camera accessory
Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.
2015-01-01
Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407
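The photometric-stereo reconstruction routine can be sketched as a per-pixel least-squares solve of I = L·n for the albedo-scaled normal. The four light directions below are hypothetical stand-ins for the LED accessory's actual geometry:

```python
import numpy as np

def photometric_stereo(images, lights):
    # Per-pixel least squares: with >= 3 known light directions L and
    # intensities I, recover g = albedo * normal, then split magnitude
    # (albedo) from direction (unit normal).
    h, w = images[0].shape
    I = np.stack([im.ravel() for im in images])    # (n_lights, h*w)
    L = np.asarray(lights, float)                  # (n_lights, 3)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 0, G / np.maximum(albedo, 1e-12), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Hypothetical 4-light rig imaging a flat Lambertian patch that faces
# the camera (true normal = +z), so each image is constant.
lights = [(0.5, 0, 0.866), (-0.5, 0, 0.866), (0, 0.5, 0.866), (0, -0.5, 0.866)]
true_n = np.array([0.0, 0.0, 1.0])
images = [np.full((8, 8), max(np.dot(l, true_n), 0.0)) for l in lights]
normals, albedo = photometric_stereo(images, lights)
```

Integrating the recovered normal field (not shown) then yields the height map with the few-millimetre accuracy reported above.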
Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.
2006-01-01
The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method for reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of the curve in each 2-D view are found, and from these the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The results of the described reconstruction methodology are evaluated through simulation studies. This reconstruction methodology is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
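The least-squares curve-fitting step in each 2-D view can be sketched as a conic fit via the SVD null space; the sampled unit circle below is an illustrative input, not data from the paper:

```python
import numpy as np

def fit_conic(x, y):
    # Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0:
    # build the design matrix and take the SVD null-space direction
    # as the coefficient vector (up to scale).
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# Points on the circle x^2 + y^2 = 1, a special quadratic curve.
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
coef = fit_conic(np.cos(t), np.sin(t))
coef = coef / coef[0]  # normalise so the x^2 coefficient is 1
```

Each fitted 2-D conic, together with its camera centre, defines a cone in 3-D; intersecting the two cones recovers the space curve.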
Analysis of method of 3D shape reconstruction using scanning deflectometry
NASA Astrophysics Data System (ADS)
Novák, Jiří; Novák, Pavel; Mikš, Antonín.
2013-04-01
This work presents a scanning deflectometric approach to solving the 3D surface reconstruction problem, based on measurements of the surface gradient of optically smooth surfaces. It is shown that a description of this problem leads to a nonlinear partial differential equation (PDE) of the first order, from which the surface shape can be reconstructed numerically. A method for efficiently solving this differential equation is proposed, based on transforming the PDE problem into an optimization problem. We describe different types of surface description for the shape reconstruction, and a numerical simulation of the presented method is performed. The reconstruction process is analyzed by computer simulations and illustrated with examples. The performed analysis confirms the robustness of the reconstruction method and its suitability for measurement and reconstruction of the 3D shape of specular surfaces.
Evaluation of Model Recognition for Grammar-Based Automatic 3D Building Model Reconstruction
NASA Astrophysics Data System (ADS)
Yu, Qian; Helmholz, Petra; Belton, David
2016-06-01
In recent years, 3D city models have been in high demand by many public and private organisations, and steady growth in both their quality and quantity is further increasing that demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process is performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction; the validity of the measures themselves is also assessed from the evaluation point of view.
3D reconstruction of a building from LIDAR data with first-and-last echo information
NASA Astrophysics Data System (ADS)
Zhang, Guoning; Zhang, Jixian; Yu, Jie; Yang, Haiquan; Tan, Ming
2007-11-01
As aerial LIDAR technology develops and LIDAR data finds widespread application in city modeling, urban planning, etc., how to automatically recognize and reconstruct buildings from LIDAR datasets has become an important research topic. Applying the information of the first-and-last echo data of the same laser point, this paper presents a scheme for 3D reconstruction of simple buildings, which mainly includes the following steps: recognition of non-boundary and boundary building points and generation of each building point cluster; localization of the boundary of each building; detection of the planes included in each cluster; and reconstruction of the building in 3D form. Experiments show that, for LIDAR data with first-and-last-echo information, the scheme can effectively and efficiently reconstruct simple buildings, such as flat and gabled buildings, in 3D.
NASA Astrophysics Data System (ADS)
Sijbers, Jan; Van der Linden, Anne-Marie; Scheunders, Paul; Van Audekerke, Johan; Van Dyck, Dirk; Raman, Erik R.
1996-04-01
The aim of this work is the development of a non-invasive technique for efficient and accurate volume quantization of the cerebellum of mice. This enables an in-vivo study of the development of the cerebellum in order to identify possible alterations in the cerebellum volume of transgenic mice. We concentrate on a semi-automatic segmentation procedure to extract the cerebellum from 3D magnetic resonance data. The proposed technique uses a 3D variant of Vincent and Soille's immersion-based watershed algorithm, applied to the gradient magnitude of the MR data. The algorithm results in a partitioning of the data into volume primitives. The known drawback of the watershed algorithm, over-segmentation, is strongly reduced by a priori application of an adaptive anisotropic diffusion filter to the gradient magnitude data. In addition, over-segmentation is further reduced a posteriori, where necessary, by merging volume primitives based on the minimum description length principle. The outcome of the preceding image processing step is presented to the user for manual segmentation. The first slice containing the object of interest is quickly segmented by the user through selection of basic image regions; the subsequent slices are then automatically segmented, and the segmentation results are manually corrected where necessary. The technique was tested on phantom objects, where segmentation errors of less than 2% were observed. Three-dimensional reconstructions of the segmented data are shown for the mouse cerebellum and the mouse brain in toto.
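An edge-preserving diffusion filter of the kind used above to curb watershed over-segmentation can be sketched in 2-D in the Perona-Malik style; the parameters and the exact conductance function are illustrative assumptions, and the paper's adaptive filter may differ:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    # Perona-Malik-style diffusion: smooth within flat regions while an
    # edge-stopping conductance g = exp(-(|grad|/kappa)^2) blocks flow
    # across strong edges. lam <= 0.25 keeps the explicit scheme stable.
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours (zero-flux borders).
        dn = np.roll(u, 1, 0) - u
        dn[0, :] = 0
        ds = np.roll(u, -1, 0) - u
        ds[-1, :] = 0
        de = np.roll(u, -1, 1) - u
        de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u
        dw[:, 0] = 0
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        u += lam * flux
    return u

# A step edge plus small ripples: the edge should survive, the
# ripples should be damped.
noisy = np.zeros((16, 16))
noisy[:, 8:] = 1.0
noisy += 0.01 * np.sin(np.arange(16))[None, :]
smoothed = anisotropic_diffusion(noisy)
```

Applying such a filter to the gradient magnitude flattens spurious minima, so the subsequent watershed produces far fewer primitive regions.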
NASA Astrophysics Data System (ADS)
Liao, Rui; Xu, Ning; Sun, Yiyong
2008-03-01
Presentation of detailed anatomical structures via 3D computed tomography (CT) volumes helps visualization and navigation in electrophysiology (EP) procedures. Registration of the CT volume with the online fluoroscopy is, however, a challenging task for EP applications due to the lack of discernible features in fluoroscopic images. In this paper, we propose to use the coronary sinus (CS) catheter in bi-plane fluoroscopic images and the coronary sinus in the CT volume as a location constraint to accomplish 2D-3D registration. Two automatic registration algorithms are proposed in this study, and their performance is investigated on both simulated and real data. It is shown that, compared to registration using mono-plane fluoroscopy, registration using bi-plane images results in substantially higher accuracy in 3D and enhanced robustness. In addition, compared to registering the projection of the CS to the 2D CS catheter, it is more desirable to reconstruct a 3D CS catheter from the bi-plane fluoroscopy and then perform a 3D-3D registration between the CS and the reconstructed CS catheter. Quantitative validation based on simulation and visual inspection of real data demonstrates the feasibility of the proposed workflow in EP procedures.
3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement
NASA Astrophysics Data System (ADS)
Barba, S.; Fiorillo, F.; De Feo, E.
2013-02-01
[…] In the ARTEC digital mock-up, for example, it is possible to select the individual frames, already polygonal and geo-referenced at the time of capture; however, unlike in the low-cost environment, which produces a good graphic definition, automated texturization is not possible. Once the final 3D models were obtained, we proceeded to a geometric and graphic comparison of the results. In order to provide an accuracy requirement and an assessment for the 3D reconstruction, we took into account the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality and accuracy. Following these empirical studies on the virtual reconstructions, a 3D documentation procedure was codified, endorsing the use of terrestrial sensors for the documentation of antlers. The results were compared with the standards set by the current provisions (see "Manual de medición" of the Government of Andalusia, Spain); to date, in fact, identification is based on data such as length, volume, colour, texture, openness, tips, structure, etc. Such data, currently gathered only with traditional instruments such as a tape measure, would be well served by a process of virtual reconstruction and cataloguing.
NASA Astrophysics Data System (ADS)
Bourrion, O.; Bosson, G.; Grignon, C.; Bouly, J. L.; Richer, J. P.; Guillaudin, O.; Mayet, F.; Billard, J.; Santos, D.
2011-11-01
Directional detection of non-baryonic Dark Matter requires 3D reconstruction of low-energy nuclear recoil tracks. A gaseous micro-TPC matrix, filled with either 3He, CF4 or C4H10, has been developed within the MIMAC project. Dedicated acquisition electronics and real-time track-reconstruction software have been developed to monitor a 512-channel prototype. This self-triggered electronics uses embedded processing to reduce the data transfer to its useful part only, i.e. decoded coordinates of hit tracks and the corresponding energy measurements. An acquisition software with on-line monitoring and 3D track reconstruction is also presented.
GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.
Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H
2012-09-01
Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
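The core of DRR generation is integrating attenuation along rays through the CT volume. A minimal parallel-projection sketch is shown below; real DRR renderers, including the GPU implementation discussed above, cast perspective rays, so this is an illustrative simplification:

```python
import numpy as np

def drr_parallel(ct, axis=2):
    # Parallel-projection DRR sketch: the line integral of attenuation
    # along each ray collapses to a sum over one volume axis, followed
    # by Beer-Lambert exponential decay of the X-ray intensity.
    path = ct.sum(axis=axis)
    return np.exp(-path)

ct = np.zeros((8, 8, 8))
ct[2:6, 2:6, 2:6] = 0.5  # a dense cube embedded in air
img = drr_parallel(ct)
```

On a GPU, each output pixel's ray integral is computed by an independent thread, which is why DRR generation parallelizes so well.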
Capurso, Daniel; Bengtsson, Henrik; Segal, Mark R.
2016-01-01
The spatial organization of the genome influences cellular function, notably gene regulation. Recent studies have assessed the three-dimensional (3D) co-localization of functional annotations (e.g. centromeres, long terminal repeats) using 3D genome reconstructions from Hi-C (genome-wide chromosome conformation capture) data; however, corresponding assessments for continuous functional genomic data (e.g. chromatin immunoprecipitation-sequencing (ChIP-seq) peak height) are lacking. Here, we demonstrate that applying bump hunting via the patient rule induction method (PRIM) to ChIP-seq data superposed on a Saccharomyces cerevisiae 3D genome reconstruction can discover ‘functional 3D hotspots’, regions in 3-space for which the mean ChIP-seq peak height is significantly elevated. For the transcription factor Swi6, the top hotspot by P-value contains MSB2 and ERG11 – known Swi6 target genes on different chromosomes. We verify this finding in a number of ways. First, this top hotspot is relatively stable under PRIM across parameter settings. Second, this hotspot is among the top hotspots by mean outcome identified by an alternative algorithm, k-Nearest Neighbor (k-NN) regression. Third, the distance between MSB2 and ERG11 is smaller than expected (by resampling) in two other 3D reconstructions generated via different normalization and reconstruction algorithms. This analytic approach can discover functional 3D hotspots and potentially reveal novel regulatory interactions. PMID:26869583
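The k-NN regression check described above reduces to computing, for each point of the 3D reconstruction, the mean ChIP-seq peak height over its nearest neighbours in 3-space. A minimal pure-Python sketch (the coordinates and values below are toy data, not the study's yeast reconstruction):

```python
import math

def knn_local_means(coords, values, k=3):
    """For each 3-D point, return the mean of `values` over its k nearest
    neighbours (the point itself included), as in k-NN regression."""
    means = []
    for p in coords:
        order = sorted(range(len(coords)), key=lambda j: math.dist(p, coords[j]))
        nearest = order[:k]
        means.append(sum(values[j] for j in nearest) / k)
    return means

# Toy data: a tight spatial cluster with high peak heights, plus scattered low ones
coords = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (5, 5, 5), (6, 5, 5), (5, 6, 6)]
values = [10.0, 12.0, 11.0, 1.0, 2.0, 1.5]
local = knn_local_means(coords, values, k=3)
hotspot = max(range(len(coords)), key=lambda i: local[i])
```

A hotspot candidate is then a region where this local mean is significantly elevated over the genome-wide background; PRIM reaches a similar goal by iteratively peeling away low-outcome sub-boxes instead of averaging over neighbours.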
Impact of Level of Details in the 3D Reconstruction of Trees for Microclimate Modeling
NASA Astrophysics Data System (ADS)
Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.
2016-06-01
In the 21st century, urban areas undergo specific climatic conditions such as urban heat islands, whose frequency and intensity increase over the years. To understand and monitor these conditions, the effects of vegetation on urban climate are studied. It appears that a natural phenomenon, the evapotranspiration of trees, generates a cooling effect in urban environments. In this work, a 3D microclimate model is used to quantify the evapotranspiration of trees in relation to their architecture, their physiology and the climate. These three characteristics are determined with field measurements and data processing. Based on point clouds acquired with a terrestrial laser scanner (TLS), the 3D reconstruction of the tree wood architecture is performed. Then the 3D reconstruction of leaves is carried out from the 3D skeleton of vegetative shoots and allometric statistics. With the aim of extending the simulation to several trees simultaneously, it is necessary to apply the 3D reconstruction process to each tree individually. However, for both the acquisition and the processing, the 3D reconstruction approach is time consuming. Mobile laser scanners could provide point clouds faster than static TLS, but this implies a lower point density. The processing time could also be shortened, under the assumption that a coarser 3D model is sufficient for the simulation. In this context, the level of detail and accuracy of the reconstructed 3D tree model must be studied. In this paper, first tests assessing their impact on the determination of evapotranspiration are presented.
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and image series from different digital cameras. Two different web services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of model generation is given, considering interactive and processing time costs.
3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries
NASA Astrophysics Data System (ADS)
Izadi, Mohammad
In this thesis, the problem of three dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Here, two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as output, which contains 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and loads all slant-view images that contain this scene from an input image dataset. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west) based on the geolocation and auxiliary data accompanying the input data (metadata that describes the acquisition parameters at the capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs 3D flat surfaces that are visible in each view using an iterative algorithm. The 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. In the experimental results, both systems are evaluated quantitatively and qualitatively, and different aspects of the two systems, including accuracy, stability, and execution time, are discussed.
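The basic geometry behind shadow-based height estimation is simple, even though the thesis wraps it in a fuzzy rule-based comparison of predicted versus observed shadow regions. A minimal sketch of the underlying trigonometric relation (the function name and flat-ground assumption are illustrative, not the thesis's method):

```python
import math

def building_height_from_shadow(shadow_length_m, sun_elevation_deg):
    """Estimate building height from its shadow length on flat ground.
    Geometry: tan(sun_elevation) = height / shadow_length, so
    height = shadow_length * tan(sun_elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# With the sun at 45 degrees, the shadow length equals the building height
h = building_height_from_shadow(20.0, 45.0)
```

The fuzzy method effectively inverts this relation iteratively: it predicts the shadow cast by a candidate height using the sun's azimuth/elevation from the image metadata, and keeps the height whose predicted shadow best matches the detected shadow evidence.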
Comparison of Parallel MRI Reconstruction Methods for Accelerated 3D Fast Spin-Echo Imaging
Xiao, Zhikui; Hoge, W. Scott; Mulkern, R.V.; Zhao, Lei; Hu, Guangshu; Kyriakos, Walid E.
2014-01-01
Parallel MRI (pMRI) achieves imaging acceleration by partially substituting gradient-encoding steps with spatial information contained in the component coils of the acquisition array. Variable-density subsampling in pMRI was previously shown to yield improved two-dimensional (2D) imaging in comparison to uniform subsampling, but has yet to be used routinely in clinical practice. In an effort to reduce acquisition time for 3D fast spin-echo (3D-FSE) sequences, this work explores a specific nonuniform sampling scheme for 3D imaging, subsampling along two phase-encoding (PE) directions on a rectilinear grid. We use two reconstruction methods—2D-GRAPPA-Operator and 2D-SPACE RIP—and present a comparison between them. We show that high-quality images can be reconstructed using both techniques. To evaluate the proposed sampling method and reconstruction schemes, results via simulation, phantom study, and in vivo 3D human data are shown. We find that fewer artifacts can be seen in the 2D-SPACE RIP reconstructions than in 2D-GRAPPA-Operator reconstructions, with comparable reconstruction times. PMID:18727083
Glacial isostatic adjustment on 3-D Earth models: a finite-volume formulation
NASA Astrophysics Data System (ADS)
Latychev, Konstantin; Mitrovica, Jerry X.; Tromp, Jeroen; Tamisiea, Mark E.; Komatitsch, Dimitri; Christara, Christina C.
2005-05-01
We describe and present results from a finite-volume (FV) parallel computer code for forward modelling the Maxwell viscoelastic response of a 3-D, self-gravitating, elastically compressible Earth to an arbitrary surface load. We implement a conservative, control volume discretization of the governing equations using a tetrahedral grid in Cartesian geometry and a low-order, linear interpolation. The basic starting grid honours all major radial discontinuities in the Preliminary Reference Earth Model (PREM), and the models are permitted arbitrary spatial variations in viscosity and elastic parameters. These variations may be either continuous or discontinuous at a set of grid nodes forming a 3-D surface within the (regional or global) modelling domain. In the second part of the paper, we adopt the FV methodology and a spherically symmetric Earth model to generate a suite of predictions sampling a broad class of glacial isostatic adjustment (GIA) data types (3-D crustal motions, long-wavelength gravity anomalies). These calculations, based on either a simple disc load history or a global Late Pleistocene ice load reconstruction (ICE-3G), are benchmarked against predictions generated using the traditional normal-mode approach to GIA. The detailed comparison provides a guide for future analyses (e.g. what grid resolution is required to obtain a specific accuracy?) and it indicates that discrepancies in predictions of 3-D crustal velocities less than 0.1 mm yr-1 are generally obtainable for global grids with ~3 × 106 nodes; however, grids of higher resolution are required to predict large-amplitude (>1 cm yr-1) radial velocities in zones of peak post-glacial uplift (e.g. James Bay) to the same level of absolute accuracy. We conclude the paper with a first application of the new formulation to a 3-D problem. Specifically, we consider the impact of mantle viscosity heterogeneity on predictions of present-day 3-D crustal motions in North America. In these tests, the
3D Coronal Magnetic Field Reconstruction Based on Infrared Polarimetric Observations
NASA Astrophysics Data System (ADS)
Kramar, M.; Lin, H.; Tomczyk, S.
2014-12-01
Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal phenomena at all scales. Significant progress has recently been achieved with the deployment of the Coronal Multichannel Polarimeter (CoMP) of the High Altitude Observatory (HAO). The instrument provides polarization measurements of the Fe XIII 10747 Å forbidden line emission. The observed polarization is the result of a line-of-sight (LOS) integration through a nonuniform temperature, density and magnetic field distribution. In order to resolve the LOS problem and utilize this type of data, a vector tomography method has been developed for 3D reconstruction of the coronal magnetic field. The 3D electron density and temperature, needed as additional input, have been reconstructed by a tomography method based on STEREO/EUVI data. We will present the 3D coronal magnetic field and the associated 3D curl B, density, and temperature resulting from these inversions.
3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images
NASA Astrophysics Data System (ADS)
Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Thom, S. A. McG.
Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to two-dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential-matrix-based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach, and correspondence between points from different images was calculated. The results of 3D reconstruction clearly show the centreline of the retinal vessels and their 3D curvature. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.
Reliable Gait Recognition Using 3D Reconstructions and Random Forests - An Anthropometric Approach.
Sandau, Martin; Heimbürger, Rikke V; Jensen, Karl E; Moeslund, Thomas B; Aanaes, Henrik; Alkjaer, Tine; Simonsen, Erik B
2016-05-01
Photogrammetric measurements of bodily dimensions and analysis of gait patterns in CCTV are important tools in forensic investigations, but accurate extraction of the measurements is challenging. This study tested whether manual annotation of the joint centers on 3D reconstructions could provide reliable recognition. Sixteen participants performed normal walking while 3D reconstructions were obtained continuously. Segment lengths and kinematics from the extremities were manually extracted by eight expert observers. The results showed that all the participants were recognized, assuming the same expert annotated the data. Recognition based on data annotated by different experts was less reliable, achieving 72.6% correct recognitions, as some parameters were heavily affected by interobserver variability. This study verified that 3D reconstructions are feasible for forensic gait analysis as an improved alternative to conventional CCTV. However, further studies are needed to account for the use of different clothing, field conditions, etc. PMID:27122399
Moriconi, S; Scalco, E; Broggi, S; Avuzzi, B; Valdagni, R; Rizzo, G
2015-08-01
A novel approach for three-dimensional (3D) surface reconstruction of anatomical structures in radiotherapy (RT) is presented. It starts from manual cross-sectional contours and combines image voxel segmentation processing with implicit surface streaming methods using wavelets. 3D meshes reconstructed with the proposed approach are compared to those obtained from a traditional triangulation algorithm. Qualitative and quantitative evaluations are performed in terms of mesh quality metrics. Differences in smoothness, detail and accuracy are observed in the comparison, considering three different anatomical districts and several organs at risk in radiotherapy. Overall, the best performance was recorded for the proposed approach, regardless of the complexity of the anatomical structure. This demonstrates the efficacy of the proposed approach for 3D surface reconstruction in radiotherapy and allows for further specific image analyses using real biomedical data. PMID:26737226
NASA Astrophysics Data System (ADS)
Reis, Sara; Eiben, Bjoern; Mertzanidou, Thomy; Hipwell, John; Hermsen, Meyke; van der Laak, Jeroen; Pinder, Sarah; Bult, Peter; Hawkes, David
2015-03-01
There is currently an increasing interest in combining the information obtained from radiology and histology with the intent of gaining a better understanding of how different tumour morphologies can lead to distinctive radiological signs which might predict overall treatment outcome. Relating information at different resolution scales is challenging. Reconstructing 3D volumes from histology images could be the key to interpreting and relating the radiological image signal to tissue microstructure. The goal of this study is to determine the minimum sampling (maximum spacing between histological sections through a fixed surgical specimen) required to create a 3D reconstruction of the specimen to a specific tolerance. We present initial results for one lumpectomy specimen case where 33 consecutive histology slides were acquired.
Full 3-D cluster-based iterative image reconstruction tool for a small animal PET camera
NASA Astrophysics Data System (ADS)
Valastyán, I.; Imrek, J.; Molnár, J.; Novák, D.; Balkay, L.; Emri, M.; Trón, L.; Bükki, T.; Kerek, A.
2007-02-01
Iterative reconstruction methods are commonly used to obtain images with high resolution and good signal-to-noise ratio in nuclear imaging. The aim of this work was to develop a scalable, fast, cluster-based, fully 3-D iterative image reconstruction package for our small animal PET camera, the miniPET. The reconstruction package determines the 3-D radioactivity distribution from list-mode data sets and can also simulate noise-free projections of digital phantoms. We separated the system matrix generation from the fully 3-D iterative reconstruction process. As the detector geometry is fixed for a given camera, the system matrix describing this geometry is calculated only once and used for every image reconstruction, making the process much faster. The Poisson and random noise sensitivity of the ML-EM iterative algorithm were studied for our small animal PET system with the help of the simulation and reconstruction tool. The reconstruction tool has also been tested with data collected by the miniPET from line- and cylinder-shaped phantoms and from a rat.
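The ML-EM update used in such packages has a compact closed form once the system matrix is precomputed. A minimal NumPy sketch on a toy two-voxel problem (illustrative only; the miniPET package operates on list-mode data and a much larger, sparse system matrix):

```python
import numpy as np

def mlem(system_matrix, counts, n_iter=200):
    """ML-EM iterations for emission tomography.
    system_matrix[i, j] = probability that a decay in voxel j is detected in bin i
    counts[i]           = measured counts in detector bin i
    Update: x <- x / s * A^T (y / (A x)), with s = column sums of A."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(counts, dtype=float)
    x = np.ones(A.shape[1])          # uniform initial image
    sens = A.sum(axis=0)             # sensitivity (normalisation) image
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        ratio = np.where(proj > 0, y / np.where(proj > 0, proj, 1.0), 0.0)
        x = x / sens * (A.T @ ratio) # multiplicative EM update
    return x

# Tiny 2-voxel, 2-bin toy problem with an invertible system matrix
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
x_true = np.array([100.0, 10.0])
y = A @ x_true                       # noise-free projections
x_rec = mlem(A, y)
rec0, rec1 = float(x_rec[0]), float(x_rec[1])
```

Because the update is multiplicative, the image stays non-negative throughout, which is one reason ML-EM is the workhorse for Poisson-distributed PET data.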
SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry
Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N
2014-06-01
Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort of patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D model was reconstructed by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error using the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well on visual inspection. The human skin was reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)
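The eight-point step in this pipeline estimates the fundamental matrix F that every correspondence must satisfy via x2ᵀ F x1 = 0. A self-contained NumPy sketch of the normalised eight-point algorithm on exact synthetic correspondences (the RANSAC outlier-rejection wrapper and the camera setup below are illustrative assumptions, not the abstract's implementation):

```python
import numpy as np

def eight_point(x1, x2):
    """Normalised eight-point algorithm (Hartley): estimate the fundamental
    matrix from >= 8 point correspondences. x1, x2: (N, 2) pixel coordinates."""
    def normalise(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T
    p1, T1 = normalise(np.asarray(x1, float))
    p2, T2 = normalise(np.asarray(x2, float))
    # Each correspondence contributes one row of the linear system A f = 0
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 (fundamental matrices are singular)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                  # undo the normalisation

# Synthetic two-view setup: identity camera and a translated camera
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (12, 3)) + np.array([0, 0, 5])   # points in front
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.5], [0.0]])])
def project(P, X):
    x = np.column_stack([X, np.ones(len(X))]) @ P.T
    return x[:, :2] / x[:, 2:]
x1, x2 = project(P1, X), project(P2, X)
F = eight_point(x1, x2)
# Epipolar constraint x2^T F x1 ~ 0 for every correspondence
h = lambda p: np.column_stack([p, np.ones(len(p))])
residual = float(np.abs(np.einsum('ij,jk,ik->i', h(x2), F, h(x1))).max())
```

In the RANSAC wrapper mentioned in the abstract, this estimator is run repeatedly on random 8-point subsets, and the F with the most inliers under an epipolar-distance threshold is kept.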
3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2012-01-01
Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural Heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free form artworks. The structured light scanner provides high resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by referring metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been experienced through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork has been a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface. PMID:23223079
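The core of the multi-view alignment above is applying the tracked rotation-translation to each range map. A minimal NumPy sketch of that coordinate transposition (the pose values are hypothetical, standing in for what the stereo-photogrammetric tracker would report for one scanner view):

```python
import numpy as np

def local_to_global(points_local, R, t):
    """Map a range map from the scanner's local frame into the global frame
    defined by the tracking sensor: p_global = R @ p_local + t."""
    return points_local @ np.asarray(R, float).T + np.asarray(t, float)

# Hypothetical tracked pose: 90-degree rotation about z plus a translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 0.0, 0.0])
scan = np.array([[1.0, 0.0, 0.0],    # two sample range-map points
                 [0.0, 2.0, 0.0]])
global_pts = local_to_global(scan, R, t)
```

Because every view is expressed in the one global frame defined by the tracker, no pairwise ICP-style registration between overlapping range maps is needed.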
Manifold Based Optimization for Single-Cell 3D Genome Reconstruction
Collas, Philippe
2015-01-01
The three-dimensional (3D) structure of the genome is important for orchestration of gene expression and cell differentiation. While mapping genomes in 3D has for a long time been elusive, recent adaptations of high-throughput sequencing to chromosome conformation capture (3C) techniques, allows for genome-wide structural characterization for the first time. However, reconstruction of "consensus" 3D genomes from 3C-based data is a challenging problem, since the data are aggregated over millions of cells. Recent single-cell adaptations to the 3C-technique, however, allow for non-aggregated structural assessment of genome structure, but data suffer from sparse and noisy interaction sampling. We present a manifold based optimization (MBO) approach for the reconstruction of 3D genome structure from chromosomal contact data. We show that MBO is able to reconstruct 3D structures based on the chromosomal contacts, imposing fewer structural violations than comparable methods. Additionally, MBO is suitable for efficient high-throughput reconstruction of large systems, such as entire genomes, allowing for comparative studies of genomic structure across cell-lines and different species. PMID:26262780
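Reconstruction methods of this family recover 3D coordinates from contact-derived pairwise distances. As a point of reference (classical multidimensional scaling, not the paper's manifold based optimization), the following NumPy sketch recovers a toy helix exactly from its distance matrix; real single-cell Hi-C distances are sparse and noisy, which is what MBO is designed to handle:

```python
import numpy as np

def classical_mds(D, dim=3):
    """Recover point coordinates (up to rotation/reflection/translation)
    from a matrix of pairwise Euclidean distances."""
    D2 = np.asarray(D, float) ** 2
    n = len(D2)
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ D2 @ J                     # Gram matrix of centred points
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    w, V = w[::-1][:dim], V[:, ::-1][:, :dim] # keep the top `dim` components
    return V * np.sqrt(np.clip(w, 0.0, None))

# Toy "genome": points on a helix; their distances stand in for contact-derived ones
t = np.linspace(0, 4 * np.pi, 20)
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
X_rec = classical_mds(D)
D_rec = np.linalg.norm(X_rec[:, None, :] - X_rec[None, :, :], axis=-1)
max_err = float(np.abs(D - D_rec).max())
```

With complete, noise-free Euclidean distances this recovers the structure exactly; the practical difficulty in single-cell data is converting sparse contact counts into usable distances at all.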
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
Chen, G; Pan, X; Stayman, J; Samei, E
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
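The orientation cue comes from convolving event maps with a bank of Gabor filters at several orientations. A minimal NumPy sketch of generating such a bank (kernel size, wavelength and sigma below are illustrative choices, not the paper's parameters):

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0):
    """Real part of a 2-D Gabor filter tuned to edges of orientation `theta`:
    a Gaussian envelope multiplied by a cosine carrier along the rotated x-axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A 4-orientation bank (0, 45, 90, 135 degrees)
bank = [gabor_kernel(theta=k * np.pi / 4) for k in range(4)]
center = float(bank[0][4, 4])
```

Matching is then restricted to event pairs whose strongest filter responses agree in orientation, which prunes many false stereo correspondences.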
NASA Astrophysics Data System (ADS)
González, C. A.; Dávila, A.; Garnica, G.
2007-09-01
Two projection systems that use an LCoS phase modulator are proposed for 3D shape reconstruction. The LCoS is used as a holographic system or as a weak phase projector; both configurations project a set of fringe patterns that are processed by the technique known as temporal phase unwrapping. To minimize the influence of camera sampling and speckle noise in the projected fringes, a speckle noise reduction technique is applied to the speckle patterns generated by the holographic optical system. Experiments with 3D shape reconstruction of an ophthalmic mold and other test specimens show the viability of the proposed techniques.
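Fringe-projection profilometry of this kind first recovers a wrapped phase per pixel from phase-shifted patterns before temporal unwrapping. A minimal sketch of the standard four-step phase-shifting formula (a textbook relation, not necessarily the exact variant used by these authors):

```python
import math

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe intensities shifted by 90 degrees each:
    I_k = a + b*cos(phi + k*pi/2), k = 0..3.
    Then I4 - I2 = 2b*sin(phi) and I1 - I3 = 2b*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)

# One pixel of a synthetic fringe: background a=2, modulation b=1, true phase 0.7 rad
a, b, phi = 2.0, 1.0, 0.7
I = [a + b * math.cos(phi + k * math.pi / 2) for k in range(4)]
phi_rec = wrapped_phase(*I)
```

Temporal phase unwrapping then removes the 2π ambiguity by repeating this measurement over a sequence of fringe frequencies, so each pixel's absolute phase (and hence height) is resolved independently of its neighbours.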
Hui, CheukKai; Robertson, Daniel; Beddar, Sam
2014-08-21
An accurate and high-resolution quality assurance (QA) method for proton radiotherapy beams is necessary to ensure correct dose delivery to the target. Detectors based on a large volume of liquid scintillator have shown great promise in providing fast and high-resolution measurements of proton treatment fields. However, previous work with these detectors has been limited to two-dimensional measurements, and the quantitative measurement of dose distributions was lacking. The purpose of the current study is to assess the feasibility of reconstructing three-dimensional (3D) scintillation light distributions of spot scanning proton beams using a scintillation system. The proposed system consists of a tank of liquid scintillator imaged by charge-coupled device cameras at three orthogonal viewing angles. Because of the limited number of viewing angles, we developed a profile-based technique to obtain an initial estimate that can improve the quality of the 3D reconstruction. We found that our proposed scintillator system and profile-based technique can reconstruct a single energy proton beam in 3D with a gamma passing rate (3%/3 mm local) of 100.0%. For a single energy layer of an intensity modulated proton therapy prostate treatment plan, the proposed method can reconstruct the 3D light distribution with a gamma pass rate (3%/3 mm local) of 99.7%. In addition, we also found that the proposed method is effective in detecting errors in the treatment plan, indicating that it can be a very useful tool for 3D proton beam QA. PMID:25054735
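The gamma passing rates quoted above come from gamma analysis, which scores each reference point by the best combined dose-difference/distance agreement found in the evaluated distribution. A simplified 1-D, local-tolerance sketch (brute-force search over all points; clinical tools interpolate and work in 3-D):

```python
import math

def gamma_index_1d(ref, evaluated, spacing, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D gamma analysis with a local dose tolerance.
    gamma_i = min_j sqrt((dd/dose_tol_local)^2 + (dx/dist_tol)^2);
    a reference point passes when gamma_i <= 1."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float('inf')
        for j, d_eval in enumerate(evaluated):
            dd = d_eval - d_ref                     # dose difference
            dx = (j - i) * spacing                  # spatial distance (mm)
            tol = dose_tol * d_ref if d_ref > 0 else dose_tol
            best = min(best, (dd / tol) ** 2 + (dx / dist_tol) ** 2)
        gammas.append(math.sqrt(best))
    return gammas

ref = [10.0, 20.0, 30.0, 20.0, 10.0]
shifted = [d * 1.02 for d in ref]       # 2% dose error everywhere
g = gamma_index_1d(ref, shifted, spacing=1.0)
passing = sum(x <= 1.0 for x in g) / len(g)
```

A uniform 2% dose error passes a 3%/3 mm local criterion everywhere (gamma = 2/3), which illustrates why the 3%/3 mm passing rates reported above can reach 100% despite small reconstruction errors.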
Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Yu, Q.; Helmholz, P.; Belton, D.; West, G.
2014-04-01
The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using DXF (Drawing Exchange Format), a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.
3D face reconstruction from limited images based on differential evolution
NASA Astrophysics Data System (ADS)
Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.
2011-09-01
3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for 3D face synthesis and reconstruction using a simple and efficient global optimizer. A 3D-2D matching algorithm is presented that integrates the 3D morphable model (3DMM) with the differential evolution (DE) algorithm. In 3DMM, the process of fitting shape and texture information to 2D images is treated as a search for the global minimum in a high-dimensional feature space, in which optimization is prone to local convergence. Unlike the traditional scheme used in 3DMM, DE appears robust against stagnation in local minima and insensitive to initial values in face reconstruction. Benefiting from DE's performance, 3D face models can be created from a single 2D image under various illumination and pose conditions. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high fidelity. The validation process shows only an insignificant difference between the input image and the 2D face image projected from the 3D model.
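Differential evolution itself is a generic population-based global optimizer. A minimal sketch of the idea using SciPy's implementation, with a toy rippled cost function standing in for the 3DMM fitting cost (the cost, `target`, and bounds are illustrative assumptions, not the paper's model):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the 3DMM fitting cost: a quadratic bowl with cosine
# ripples added, so that gradient descent from a bad start can stall in
# a local minimum while DE still reaches the global one.
target = np.array([0.5, -0.2, 0.8])

def cost(alpha):
    residual = alpha - target
    return np.dot(residual, residual) + 0.5 * np.sum(1 - np.cos(8 * residual))

bounds = [(-1.0, 1.0)] * 3
result = differential_evolution(cost, bounds, seed=0, tol=1e-8)
print(result.x)  # should lie close to `target`
```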
NASA Astrophysics Data System (ADS)
Meghoufel, Brahim
A new technique has been developed for 3D reconstruction of the two adjacent structures forming the hip joint from 3D CT-scan images. The femoral head and the acetabulum are reconstructed using a 3D multi-structure segmentation method for adjacent surfaces based on 3D triangular surface meshes. The method begins with a preliminary hierarchical segmentation of the two structures, using one triangular mesh for each structure. The two resulting 3D meshes are then unfolded into planar 2D surfaces: umbrella deployment is used to unfold the femoral head mesh, and 3D/2D parameterization to unfold the acetabulum mesh. The two generated planar surfaces are used to unfold the CT-scan volume around each structure, so that the surface of each structure is nearly planar in the corresponding unfolded volume. An iterative minimal-surfaces method then ensures optimal identification of both sought surfaces in the unfolded volumes. The last step of the method detects and corrects any overlap between the two structures. The method has been validated on a database of 10 3D CT-scan images, and the reconstruction results appear satisfactory. Reconstruction errors were quantified by comparison with an available manual gold standard; the resulting errors are better than those reported in the literature, with means of 0.83 +/- 0.25 mm for the acetabulum and 0.70 +/- 0.17 mm for the femoral head. The mean execution time for reconstructing the two structures of the hip joint was estimated at approximately 3.0 +/- 0.3 min. The proposed method shows the potential of image-processing solutions to support surgeons in their routine tasks, and it can be applied to any imaging modality.
Technology Transfer Automated Retrieval System (TEKTRAN)
Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...
Demonstration of digital hologram recording and 3D-scenes reconstruction in real-time
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Kulakov, Mikhail N.; Kurbatova, Ekaterina A.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.
2016-04-01
Digital holography is a technique for reconstructing information about 2D objects and 3D scenes. This is achieved by registering the interference pattern formed by two beams, an object beam and a reference beam. The pattern registered by a digital camera is processed to obtain the amplitude and phase of the object beam. The shapes of 2D objects and 3D scenes can then be reconstructed either numerically (using a computer) or optically (using spatial light modulators, SLMs). In this work, a MegaPlus II ES11000 camera was used for digital hologram recording. The camera has 4008 × 2672 pixels with a pixel size of 9 μm × 9 μm. For hologram recording, a 50 mW frequency-doubled Nd:YAG laser with a wavelength of 532 nm was used. A liquid-crystal-on-silicon SLM (HoloEye PLUTO VIS) was used for optical reconstruction of the digital holograms. The SLM has 1920 × 1080 pixels with a pixel size of 8 μm × 8 μm. For object reconstruction, a 10 mW He-Ne laser with a wavelength of 632.8 nm was used. The setups for digital hologram recording and optical reconstruction with the SLM were combined as follows. The MegaPlus Central Control software displays the frames registered by the camera on the computer monitor with little delay, and the SLM can act as an additional monitor, so the registered frames can be shown on the SLM display in near real time. Thus, recording and reconstruction of 3D scenes were achieved in real time. The resolution of the displayed frames was set equal to that of the SLM; the pixel count was therefore limited by the SLM resolution, and the frame rate by that of the camera. This holographic video setup was operated without additional software that would increase the delay between hologram recording and object reconstruction, and was demonstrated for the reconstruction of 3D scenes.
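Numerical reconstruction of a recorded hologram amounts to simulating free-space propagation of the registered complex field back to the object plane. A minimal angular-spectrum propagation sketch (the function and parameters are illustrative, not taken from the authors' software; the pixel pitch matches the camera quoted above):

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` (metres) using the
    angular spectrum method, the usual numerical route for reconstructing
    digital holograms. Evanescent components are suppressed."""
    n, m = field.shape
    fy = np.fft.fftfreq(n, d=pitch)
    fx = np.fft.fftfreq(m, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance)  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating forward and then backward by the same distance recovers the original field, which is a convenient sanity check for the transfer function.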
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality that enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. To solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and the possible involvement of ionizing radiation, and its overall robustness depends strongly on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white-light surface images and bioluminescent images of a mouse. The white-light images were applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. We then integrated multi-view luminescent images with this reconstruction and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case, a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case, a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. PMID:22727689
Reconstruction of 3D ultrasound images based on Cyclic Regularized Savitzky-Golay filters.
Toonkum, Pollakrit; Suwanwela, Nijasri C; Chinrungrueng, Chedsada
2011-02-01
This paper presents a new three-dimensional (3D) ultrasound reconstruction algorithm for generation of 3D images from a series of two-dimensional (2D) B-scans acquired in the mechanical linear scanning framework. Unlike most existing 3D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the Cyclic Regularized Savitzky-Golay (CRSG) filter, is a new variant of the Savitzky-Golay (SG) smoothing filter. The CRSG filter improves upon the original SG filter in two respects: First, a cyclic indicator function has been incorporated into the least-squares cost function to enable the CRSG filter to approximate nonuniformly spaced data of the unobserved image intensities contained in unfilled voxels and to reduce speckle noise of the observed image intensities contained in filled voxels. Second, a regularization function has been added to the least-squares cost function as a mechanism to balance the degree of speckle reduction against the degree of detail preservation. The CRSG filter has been evaluated and compared with the Voxel Nearest-Neighbor (VNN) interpolation post-processed by the Adaptive Speckle Reduction (ASR) filter, the VNN interpolation post-processed by the Adaptive Weighted Median (AWM) filter, the Distance-Weighted (DW) interpolation, and the Adaptive Distance-Weighted (ADW) interpolation, on reconstructing a synthetic 3D spherical image and a clinical 3D carotid artery bifurcation in the mechanical linear scanning framework. This preliminary evaluation indicates that the CRSG filter is more effective in both speckle reduction and geometric reconstruction of 3D ultrasound images than the other methods. PMID:20696448
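The CRSG filter builds on classical Savitzky-Golay smoothing, which fits a low-order polynomial over a sliding window by least squares. A baseline SG sketch using SciPy (illustrative only; it omits the cyclic indicator and regularization terms that define CRSG):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 101)
clean = np.sin(2 * np.pi * x)
noisy = clean + 0.2 * rng.standard_normal(x.size)

# Least-squares polynomial smoothing: 11-sample window, cubic fit.
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
```

Relative to a plain moving average, the polynomial fit preserves peaks and curvature while still suppressing noise, which is the property the CRSG extension trades off against speckle reduction.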
The point-source method for 3D reconstructions for the Helmholtz and Maxwell equations
NASA Astrophysics Data System (ADS)
Ben Hassen, M. F.; Erhard, K.; Potthast, R.
2006-02-01
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the earlier proofs based on the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. Both for 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
NASA Astrophysics Data System (ADS)
Gunga, Hanns-Christian; Suthau, Tim; Bellmann, Anke; Friedrich, Andreas; Schwanebeck, Thomas; Stoinski, Stefan; Trippel, Tobias; Kirsch, Karl; Hellwich, Olaf
2007-08-01
Both body mass and surface area are factors determining the essence of any living organism. This should also hold true for an extinct organism such as a dinosaur. The present report discusses the use of a new 3D laser scanner method to establish body masses and surface areas of an Asian elephant (Zoological Museum of Copenhagen, Denmark) and of Plateosaurus engelhardti, a prosauropod from the Upper Triassic, exhibited at the Paleontological Museum in Tübingen (Germany). This method was used to study the effect that slight changes in body shape had on body mass for P. engelhardti. It was established that body volumes varied between 0.79 m3 (slim version) and 1.14 m3 (robust version), resulting in a presumable body mass of 630 and 912 kg, respectively. The total body surface areas ranged between 8.8 and 10.2 m2, of which, in both reconstructions of P. engelhardti, approximately 33% accounts for the thorax area alone. The main difference between the two models lies in the tail and hind limb reconstruction. The tail of the slim version has a surface area of 1.98 m2, whereas that of the robust version has a surface area of 2.73 m2. The body volumes calculated for the slim version were as follows: head 0.006 m3, neck 0.016 m3, fore limbs 0.020 m3, hind limbs 0.08 m3, thoracic cavity 0.533 m3, and tail 0.136 m3. For the robust model, the following volumes were established: head 0.01 m3, neck 0.026 m3, fore limbs 0.025 m3, hind limbs 0.18 m3, thoracic cavity 0.616 m3, and tail 0.28 m3. Based on these body volumes, scaling equations were used to estimate the sizes of the organs of this extinct dinosaur.
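The reported masses are consistent with multiplying the scanned volumes by an assumed whole-body density of roughly 800 kg/m^3; this density is implied by the numbers in the abstract (912 kg / 1.14 m^3 = 800), not stated in it:

```python
# Assumed mean whole-body density implied by the abstract's figures.
DENSITY = 800.0  # kg/m^3 (assumption)

for label, volume in [("slim", 0.79), ("robust", 1.14)]:
    mass = DENSITY * volume
    print(f"{label}: {volume} m^3 -> {mass:.0f} kg")
```

The slim model comes out near the quoted 630 kg and the robust model at the quoted 912 kg, suggesting a single density assumption underlies both estimates.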
Reconstructing photorealistic 3D models from image sequence using domain decomposition method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2009-11-01
In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through 3D scanning. Structured light and photogrammetry are the two main methods of acquiring 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom obtained. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to hold the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into combinations of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be resolved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each mesh element, finally yielding a photorealistic 3D model. A toy pig is used to verify the algorithm, and the results are encouraging.
Some Methods of Applied Numerical Analysis to 3d Facial Reconstruction Software
NASA Astrophysics Data System (ADS)
Roşu, Şerban; Ianeş, Emilia; Roşu, Doina
2010-09-01
This paper describes the collective work performed by medical doctors from the University of Medicine and Pharmacy Timisoara and engineers from the Politechnical Institute Timisoara in the effort to create the first Romanian 3D reconstruction software based on CT or MRI scans and to test the created software in clinical practice.
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo-electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation-maximization algorithm, and an example based on Cowpea mosaic virus is provided.
Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.
Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee
2015-12-01
3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining the nasal passages, mucosa, polyps, sinuses, and nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm is computationally expensive, particularly in the feature matching step, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best-matched pair. A fuzzy zoning approach is developed here to confine the feature matching area, so that matching between two corresponding features from different images can be performed efficiently, greatly reducing the matching time. The proposed technique is tested with endoscopic images of phantoms and compared with the original SIFT technique in terms of matching time and the average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity from images taken with a rigid nasal endoscope. The results show that the fuzzy-based approach is significantly faster than the traditional SIFT technique and provides similar quality of the 3D models, and that it can be used for reconstructing a nasal cavity imaged by a rigid nasal endoscope. PMID:26498516
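The core idea of zoning, restricting the descriptor search to a spatial neighbourhood instead of comparing against every feature in the next frame, can be sketched as follows. This hard-window version is a simplification of the paper's fuzzy zones, and all names are illustrative:

```python
import numpy as np

def zoned_match(desc1, pts1, desc2, pts2, radius):
    """Match each feature in image 1 to its nearest descriptor in image 2,
    searching only features whose positions lie within `radius` pixels
    (a hard-window simplification of fuzzy zoning)."""
    matches = []
    for i, (d, p) in enumerate(zip(desc1, pts1)):
        # Restrict candidates to the spatial zone around this feature.
        in_zone = np.linalg.norm(pts2 - p, axis=1) <= radius
        if not np.any(in_zone):
            continue
        idx = np.flatnonzero(in_zone)
        # Nearest neighbour in descriptor space among zone candidates only.
        dists = np.linalg.norm(desc2[idx] - d, axis=1)
        matches.append((i, idx[np.argmin(dists)]))
    return matches
```

Because inter-frame motion in endoscopy is small, candidates outside the zone are almost never correct matches, so the restriction cuts comparisons without losing pairs.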
Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor
El Natour, Ghina; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice
2015-01-01
In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors considering the robustness to the environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables observed features to be reconstructed in 3D from a single acquisition (static sensor), a capability not always available in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data. PMID:26473874
A preliminary investigation of 3D preconditioned conjugate gradient reconstruction for cone-beam CT
NASA Astrophysics Data System (ADS)
Fu, Lin; De Man, Bruno; Zeng, Kai; Benson, Thomas M.; Yu, Zhou; Cao, Guangzhi; Thibault, Jean-Baptiste
2012-03-01
Model-based iterative reconstruction (MBIR) methods based on maximum a posteriori (MAP) estimation have been recently introduced to multi-slice CT scanners. The model-based approach has shown promising image quality improvement with reduced radiation dose compared to conventional FBP methods, but the associated high computation cost limits its widespread use in clinical environments. Among the various choices of numerical algorithms to optimize the MAP cost function, simultaneous update methods such as the conjugate gradient (CG) method have a relatively high level of parallelism to take full advantage of a new generation of many-core computing hardware. With proper preconditioning techniques, fast convergence speeds of CG algorithms have been demonstrated in 3D emission and 2D transmission reconstruction. However, 3D transmission reconstruction using preconditioned conjugate gradient (PCG) has not been reported. Additional challenges in applying PCG in 3D CT reconstruction include the large size of clinical CT data, shift-variant and incomplete sampling, and complex regularization schemes to meet the diagnostic standard of image quality. In this paper, we present a ramp-filter based PCG algorithm for 3D CT MBIR. Convergence speeds of algorithms with and without using the preconditioner are compared.
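A preconditioned conjugate gradient iteration of the kind discussed above can be sketched as follows. The toy SPD system and Jacobi (diagonal) preconditioner stand in for the CT system matrix and the paper's ramp-filter preconditioner; all names are illustrative:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    A; M_inv is a callable applying the inverse preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)          # precondition the new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy well-conditioned SPD system with a Jacobi preconditioner.
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
A = B @ B.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x = pcg(A, b, lambda r: r / np.diag(A))
```

A good preconditioner clusters the eigenvalues of the preconditioned system, which is why the choice of preconditioner (here diagonal, in the paper ramp-filter based) dominates the convergence speed.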
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach to error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali
2015-01-13
In order to improve the quality of 3D X-ray tomography reconstruction for Non-Destructive Testing (NDT), we investigate hierarchical Bayesian methods in this paper. In NDT, useful prior information about the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only the volume is estimated through the prior model of the volume, but also the hyperparameters of this prior. This additional complexity, when the reconstruction methods are applied to large volumes (from 512^3 to 8192^3 voxels), results in an increased computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper rely on algorithmic acceleration by Variational Bayesian Approximation (VBA) [1] and on hardware acceleration through projection and back-projection operators parallelized on many-core processors such as GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image, implemented in a hierarchical way [3, 4, 1]. The operators H (forward, or projection) and H^t (adjoint, or back-projection) implemented on multi-GPU [2] have been used in this study. Different methods are evaluated on the synthetic Shepp-Logan volume in terms of reconstruction quality and time. We used several simple regularizations of order 1 and order 2; other prior models also exist [5]. For a discrete image, segmentation and reconstruction can sometimes be performed at the same time, in which case the reconstruction can be done with fewer projections.