Iterative feature refinement for accurate undersampled MR image reconstruction.
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research, and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement, and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets show that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches. PMID:27032527
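The three-step loop is easy to prototype. Below is a toy 1-D sketch, not the authors' implementation: the sampling mask, threshold schedule, and refinement weight are all illustrative assumptions, with identity-domain sparsity standing in for a fixed sparsifying transform, and a hard k-space replacement standing in for the Tikhonov step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for CS-MRI: a sparse signal observed at a random
# subset of its Fourier coefficients ("undersampled k-space").
n, k, m = 256, 8, 96
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, m, replace=False)] = True
y = np.fft.fft(x_true)[mask]                     # measured k-space samples

x = np.zeros(n)
for it in range(300):
    tau = max(0.5 * 0.98 ** it, 0.01)            # decreasing threshold (assumed schedule)
    x_dn = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)   # 1) sparsity-promoting denoising
    x_rf = x_dn + 0.5 * (x - x_dn)               # 2) feature refinement: restore part of the residual
    kspace = np.fft.fft(x_rf)
    kspace[mask] = y                             # 3) data consistency (strict limit of the Tikhonov step)
    x = np.real(np.fft.ifft(kspace))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With 8 spikes and roughly 37% sampling, this continuation scheme typically drives the relative error far below that of a zero-filled reconstruction.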
Accurate Sparse-Projection Image Reconstruction via Nonlocal TV Regularization
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lowering the radiation dose; however, the incompleteness of the projection data degrades imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Owing to its theoretical imperfection, however, total variation produces blocky effects in smooth regions and blurs edges. To overcome this problem, in this paper we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. Qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared with other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information. PMID:24592168
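As a sketch of what "nonlocal" means here: instead of penalizing only adjacent-pixel differences, the nonlocal TV norm weights intensity differences by patch similarity, so true edges (dissimilar patches) are penalized less than noise. The 1-D function below is an illustrative definition with assumed patch, search-window, and bandwidth parameters, not the paper's exact discretization.

```python
import numpy as np

def nonlocal_tv(u, patch=2, search=8, h=0.5):
    """Nonlocal TV of a 1-D signal: sum of w_ij * |u_i - u_j|, where the
    weight w_ij decays with the squared distance between the patches
    centered at i and j."""
    n = len(u)
    pad = np.pad(u, patch, mode="edge")
    total = 0.0
    for i in range(n):
        p_i = pad[i:i + 2 * patch + 1]
        for j in range(max(0, i - search), min(n, i + search + 1)):
            if j == i:
                continue
            p_j = pad[j:j + 2 * patch + 1]
            w = np.exp(-np.sum((p_i - p_j) ** 2) / h ** 2)
            total += w * abs(u[i] - u[j])
    return total

flat = nonlocal_tv(np.ones(30))   # constant signal: zero penalty
step = nonlocal_tv(np.concatenate([np.zeros(15), np.ones(15)]))
```

A constant region incurs no penalty, while a sharp edge incurs only a small one because the patches on either side are dissimilar and thus weakly weighted.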
Accurate reconstruction of hyperspectral images from compressive sensing measurements
NASA Astrophysics Data System (ADS)
Greer, John B.; Flake, J. C.
2013-05-01
The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure the accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
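For reference, the Split Bregman iteration the authors build on alternates a quadratic solve, a shrinkage step, and a Bregman-variable update. A minimal 1-D TV-denoising version, an assumed toy rather than the paper's spatial-TV-plus-spectral-smoothing algorithm, looks like:

```python
import numpy as np

def split_bregman_tv(f, mu=10.0, lam=5.0, iters=50):
    """Split Bregman for min_u (mu/2)||u - f||^2 + ||Du||_1 in 1-D,
    with D the forward-difference operator (TV denoising)."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)             # (n-1) x n difference matrix
    A = mu * np.eye(n) + lam * D.T @ D         # normal equations for the u-update
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))      # quadratic solve
        g = D @ u + b
        d = np.sign(g) * np.maximum(np.abs(g) - 1.0 / lam, 0.0)   # shrinkage
        b = g - d                              # Bregman update
    return u

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(32), np.ones(32)])
noisy = clean + 0.1 * rng.standard_normal(64)
denoised = split_bregman_tv(noisy)
```

The splitting replaces the non-differentiable TV term by an auxiliary variable `d`, so each subproblem (linear solve, shrinkage) is cheap and closed-form.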
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images and define a fitness function to measure relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples are presented using simulated, experimental and patient data collected with the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary. These examples show how, using our method, accurate images can be reconstructed starting from only a broad estimate of the permittivity range. PMID:27611785
Subramanian, K R; Thubrikar, M J; Fowler, B; Mostafavi, M T; Funk, M W
2000-01-01
We present a technique that accurately reconstructs complex three dimensional blood vessel geometry from 2D intravascular ultrasound (IVUS) images. Biplane x-ray fluoroscopy is used to image the ultrasound catheter tip at a few key points along its path as the catheter is pulled through the blood vessel. An interpolating spline describes the continuous catheter path. The IVUS images are located orthogonal to the path, resulting in a non-uniform structured scalar volume of echo densities. Isocontour surfaces are used to view the vessel geometry, while transparency and clipping enable interactive exploration of interior structures. The two geometries studied are a bovine artery vascular graft having U-shape and a constriction, and a canine carotid artery having multiple branches and a constriction. Accuracy of the reconstructions is established by comparing the reconstructions to (1) silicone moulds of the vessel interior, (2) biplane x-ray images, and (3) the original echo images. Excellent shape and geometry correspondence was observed in both geometries. Quantitative measurements made at key locations of the 3D reconstructions also were in good agreement with those made in silicone moulds. The proposed technique is easily adoptable in clinical practice, since it uses x-rays with minimal exposure and existing IVUS technology. PMID:11105284
NASA Astrophysics Data System (ADS)
Drizdal, T.; Paulides, M. M.; Linthorst, M.; van Rhoon, G. C.
2012-05-01
In current clinical practice, prior to superficial hyperthermia treatment (HT), temperature probes are placed in tissue to document the thermal dose. To investigate whether the painful procedure of catheter placement can be replaced by superficial HT planning, we study whether the specific absorption rate (SAR) coverage is predictive of treatment outcome. An absolute requirement for such a study is accurate reconstruction of the applicator setup. The purpose of this study was to investigate the feasibility of reconstructing the applicator setup from multiple-view images. The accuracy of the multiple-view reconstruction method was assessed for two experimental setups using six lucite cone applicators (LCAs), representing the largest array applied at our clinic and also the most difficult scenario for the reconstruction. For the two experimental setups and 112 distances, the mean difference between photogrammetrically reconstructed and manually measured distances was 0.25 ± 0.79 mm (mean ± 1 standard deviation). Through a parameter study of translation T (mm) and rotation R (°) of the LCAs, we showed that these inaccuracies are clinically acceptable, i.e. they correspond to either a ±1.02 mm error in translation or ±0.48° in rotation, or combinations satisfying 4.35R² + 0.97T² = 1. We anticipate that such small errors will not have a relevant influence on the SAR distribution in the treated region. The clinical applicability of the procedure is shown on a patient with a breast cancer recurrence treated with reirradiation plus superficial hyperthermia using the six-LCA array. The total reconstruction procedure for six LCAs from a set of ten photos currently takes around 1.5 h. We conclude that reconstruction of the superficial HT setup from multiple-view images is feasible, with only minor errors that will have a negligible influence on treatment planning quality.
A Method for Accurate Reconstructions of the Upper Airway Using Magnetic Resonance Images
Xiong, Huahui; Huang, Xiaoqing; Li, Yong; Li, Jianhong; Xian, Junfang; Huang, Yaqi
2015-01-01
Objective: The purpose of this study is to provide an optimized method to reconstruct the structure of the upper airway (UA) based on magnetic resonance imaging (MRI) that can faithfully show the anatomical structure with a smooth surface without artificial modifications. Methods: MRI was performed on the head and neck of a healthy young male participant in the axial, coronal and sagittal planes to acquire images of the UA. The level set method was used to segment the boundary of the UA. The boundaries in the three scanning planes were registered according to the positions of crossing points and anatomical characteristics using a Matlab program. Finally, the three-dimensional (3D) NURBS (Non-Uniform Rational B-Splines) surface of the UA was constructed using the registered boundaries in all three different planes. Results: A smooth 3D structure of the UA was constructed, which captured the anatomical features from the three anatomical planes, particularly the location of the anterior wall of the nasopharynx. The volume and area of every cross section of the UA can be calculated from the constructed 3D model of the UA. Conclusions: A complete scheme of reconstruction of the UA was proposed, which can be used to measure and evaluate the 3D upper airway accurately. PMID:26066461
A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging
Yan, Hao; Folkerts, Michael; Jiang, Steve B.; Jia, Xun; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura
2014-07-15
Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is introduced to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase
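The forward-backward splitting idea, a gradient ("forward") step on the smooth term followed by a proximal ("backward") step on the non-smooth term, can be shown on a generic l1-regularized least-squares problem. This is a schematic sketch of the splitting principle only; the paper's two subproblems are full image reconstruction and deformable registration, not the toy terms used here.

```python
import numpy as np

def fbs_l1(A, y, lam, iters=200):
    """Forward-backward splitting for min_x 0.5||Ax - y||^2 + lam*||x||_1:
    a gradient step on the smooth data term, then the proximal step of the
    l1 term, which is soft-thresholding."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - t * (A.T @ (A @ x - y))      # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)   # backward (prox) step
    return x

# With A = I the minimizer is simply soft-thresholding of y.
x = fbs_l1(np.eye(5), np.array([2.0, 0.05, -1.0, 0.0, 3.0]), lam=0.1)
```

The appeal of the splitting is exactly what the abstract describes: each half-step is a well-studied subproblem with a cheap solution.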
In Situ Casting and Imaging of the Rat Airway Tree for Accurate 3D Reconstruction
Jacob, Rick E.; Colby, Sean M.; Kabilan, Senthil; Einstein, Daniel R.; Carson, James P.
2013-08-01
The use of anatomically accurate, animal-specific airway geometries is important for understanding and modeling the physiology of the respiratory system. One approach for acquiring detailed airway architecture is to create a bronchial cast of the conducting airways. However, typical casting procedures either do not faithfully preserve the in vivo branching angles, or produce rigid casts that when removed for imaging are fragile and thus easily damaged. We address these problems by creating an in situ bronchial cast of the conducting airways in rats that can be subsequently imaged in situ using 3D micro-CT imaging. We also demonstrate that deformations in airway branch angles resulting from the casting procedure are small, and that these angle deformations can be reversed through an interactive adjustment of the segmented cast geometry. Animal work was approved by the Institutional Animal Care and Use Committee of Pacific Northwest National Laboratory.
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2014-01-01
Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. To image, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of the robot arm, and that the error of the reconstructed phantom is within 0.67 mm on average compared to the model design. PMID:26158071
Defrise, Michel; Gullberg, Grant T.
2006-04-05
We give an overview of the role of Physics in Medicine and Biology in the development of tomographic reconstruction algorithms. We focus on imaging modalities involving ionizing radiation (CT, PET and SPECT) and cover a wide spectrum of reconstruction problems, from classical 2D tomography in the 1970s up to 4D and 5D problems involving dynamic imaging of moving organs.
Use of a ray-based reconstruction algorithm to accurately quantify preclinical microSPECT images.
Vandeghinste, Bert; Van Holen, Roel; Vanhove, Christian; De Vos, Filip; Vandenberghe, Stefaan; Staelens, Steven
2014-01-01
This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correction, (2) computed tomography-based attenuation correction, (3) resolution recovery, and (4) edge-preserving smoothing. It was validated using a National Electrical Manufacturers Association (NEMA) phantom. The in vivo quantification error was determined for two radiotracers: [99mTc]DMSA in naive mice (n = 10 kidneys) and [111In]octreotide in mice (n = 6) inoculated with a xenograft neuroendocrine tumor (NCI-H727). The measured energy resolution is 5.3% for 140.51 keV (99mTc), 4.8% for 171.30 keV, and 3.3% for 245.39 keV (111In). For 99mTc, an uncorrected quantification error of 28 ± 3% is reduced to 8 ± 3%. For 111In, the error reduces from 26 ± 14% to 6 ± 22%. The in vivo error obtained with 99mTc-dimercaptosuccinic acid ([99mTc]DMSA) is reduced from 16.2 ± 2.8% to -0.3 ± 2.1% and from 16.7 ± 10.1% to 2.2 ± 10.6% with [111In]octreotide. Absolute quantitative in vivo SPECT is possible without explicit system matrix measurements. An absolute in vivo quantification error smaller than 5% was achieved and exemplified for both [99mTc]DMSA and [111In]octreotide. PMID:24824961
NASA Astrophysics Data System (ADS)
Oh, Jieun; Cho, Hyosung; Je, Uikyu; Lee, Minsik; Kim, Hyojeong; Hong, Daeki; Park, Yeonok; Lee, Seonhwa; Cho, Heemoon; Choi, Sungil; Koo, Yangseo
2013-03-01
In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient data. In computed tomography (CT), for example, image reconstruction from few views would enable fast scanning with reduced dose to the patient. In this study, we investigated and implemented an efficient reconstruction method based on a compressed-sensing (CS) algorithm, which exploits the sparseness of the gradient image, for accurate, low-dose dental cone-beam CT (CBCT) reconstruction. We applied the algorithm to a commercially available dental CBCT system (Expert7™, Vatech Co., Korea) and performed experiments to demonstrate the algorithm for image reconstruction in insufficient-sampling problems. We successfully reconstructed CBCT images from several undersampled datasets and evaluated the reconstruction quality in terms of the universal quality index (UQI). The experimental demonstrations suggest that the CS-based reconstruction algorithm can be applied to current dental CBCT systems for reducing imaging dose and improving image quality.
NASA Astrophysics Data System (ADS)
Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He
2016-06-01
An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work aims to simulate the reconstruction of spectroscopic measurements with a multi-view parallel-beam scanning geometry and to analyze the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality increases markedly as the number of projection rays grows, up to about 180 rays for a 20 × 20 grid; beyond that point, additional rays have little influence on accuracy. The temperature reconstructions are more accurate than the water-vapor concentrations obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and greatly improve reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simple experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles. This feasible formulation for reconstruction research is expected to resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant
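The classical ART update underlying such methods is the Kaczmarz sweep: project the current estimate onto the hyperplane of each ray-sum equation in turn. A minimal sketch, with an assumed tiny 3 × 3 consistent system standing in for real projection geometry:

```python
import numpy as np

def art(A, b, sweeps=500, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz): cycle through the
    equations a_i . x = b_i, each time projecting the current estimate
    onto the corresponding hyperplane (relax=1 is a full projection)."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x = x + relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy consistent system in place of ray sums through a 2D grid.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x_true = np.array([1.0, 2.0, 3.0])
x = art(A, A @ x_true)
```

For consistent systems the sweeps converge to a solution; the relaxation factor and ray ordering are the usual tuning knobs in tomographic practice.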
NASA Astrophysics Data System (ADS)
Park, Yeonok; Cho, Hyosung; Je, Uikyu; Hong, Daeki; Lee, Minsik; Park, Chulkyu; Cho, Heemoon; Choi, Sungil; Koo, Yangseo
2014-08-01
In practical applications of three-dimensional (3D) tomographic techniques, such as digital breast tomosynthesis (DBT), computed tomography (CT), etc., there are often challenges for accurate image reconstruction from incomplete data. In DBT, in particular, the limited-angle and few-view projection data are theoretically insufficient for exact reconstruction; thus, the use of common filtered-backprojection (FBP) algorithms leads to severe image artifacts, such as the loss of the average image value and edge sharpening. One possible approach to alleviating these artifacts is to employ iterative statistical methods, because they potentially yield reconstructed images that are in better accordance with the measured projection data. In this work, as another promising approach, we investigated potential applications to low-dose, accurate DBT imaging with a state-of-the-art reconstruction scheme based on compressed-sensing (CS) theory. We implemented an efficient CS-based DBT algorithm and performed systematic simulation work to investigate its imaging characteristics. Using the algorithm, we obtained DBT images of substantially improved accuracy and expect it to be applicable to the development of next-generation 3D breast X-ray imaging systems.
Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.
Abbasi, Mahdi
2014-01-01
The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem arising in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using sinc basis functions. The orthogonal properties of their products then dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure that can be solved using the fast Fourier transform. This reduces the order of computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808
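The complexity claim comes from evaluating the discrete convolution with FFTs rather than directly. The 1-D helper below illustrates the device (O(N log N) per transform instead of an O(N²) direct sum); the paper applies the same idea to its 2-D D-bar convolution equation.

```python
import numpy as np

def conv_fft(a, b):
    """Linear convolution via the FFT: zero-pad both inputs to the full
    output length, multiply spectra, and transform back. Cost is
    O(N log N) versus O(N^2) for the direct sum."""
    n = len(a) + len(b) - 1
    return np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
c = conv_fft(a, b)
```

The result matches `np.convolve(a, b)` to floating-point precision; on an N × N grid the same trick gives the quoted O(N² log N) total cost.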
Overview of Image Reconstruction
Marr, R. B.
1980-04-01
Image reconstruction (or computerized tomography, etc.) is any process whereby a function, f, on R^{n} is estimated from empirical data pertaining to its integrals, ∫f(x) dx, for some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references.
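The first algorithm class, summation or simple back-projection, can be demonstrated in a few lines: smear each projection back across the image and add. A toy two-angle example with an assumed single-point object:

```python
import numpy as np

# Point object on an 8x8 grid, observed by parallel projections at 0 and 90 degrees.
img = np.zeros((8, 8))
img[2, 5] = 1.0
p0 = img.sum(axis=1)             # projection onto the row axis (0 degrees)
p90 = img.sum(axis=0)            # projection onto the column axis (90 degrees)

# Summation back-projection: smear each projection back along its rays and add.
bp = p0[:, None] + p90[None, :]
peak = np.unravel_index(np.argmax(bp), bp.shape)
```

The point reappears at the correct location but smeared along both rays, which is exactly why the second class (convolution, or filtered back-projection) filters the projections before back-projecting.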
Structured image reconstruction for three-dimensional ghost imaging lidar.
Yu, Hong; Li, Enrong; Gong, Wenlin; Han, Shensheng
2015-06-01
A structured image reconstruction method is proposed to obtain high-quality images in three-dimensional ghost imaging lidar. By considering the spatial structure relationship between recovered images of scene slices at different longitudinal distances, an orthogonality constraint is incorporated to reconstruct the three-dimensional scenes in remote sensing. Numerical simulations demonstrate that scene slices with various sparsity ratios can be recovered more accurately by applying the orthogonality constraint, and the enhancement is significant especially for ghost imaging with fewer measurements. A simulated three-dimensional city scene is successfully reconstructed using structured image reconstruction in three-dimensional ghost imaging lidar. PMID:26072814
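For context, the baseline that structured reconstruction improves upon is plain correlation ghost imaging: correlate bucket-detector values with the random illumination patterns. A 1-D toy sketch with assumed parameters, a single slice, and no orthogonality constraint:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D scene with two bright pixels, illuminated by m random patterns;
# a single "bucket" detector records total reflected light per shot.
n, m = 16, 4000
obj = np.zeros(n)
obj[[3, 9]] = 1.0
patterns = rng.random((m, n))        # random speckle patterns
bucket = patterns @ obj              # bucket-detector signal per shot

# Correlation reconstruction: covariance of bucket values with each pixel's pattern.
recon = (bucket - bucket.mean()) @ (patterns - patterns.mean(axis=0)) / m
bright = set(np.argsort(recon)[-2:])
```

The two bright pixels dominate the correlation estimate; the paper's contribution is to couple such per-slice estimates across longitudinal distances.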
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data from projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem, and the aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, temporarily appearing artifacts are reduced with a bilateral filter, and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction. PMID:26208310
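The augmented Lagrangian mechanism for enforcing a known value as an equality constraint can be sketched on a toy problem: pin one "pixel" to a known attenuation value while fitting the rest. This is an illustrative reduction, not the paper's transmission log-likelihood model.

```python
import numpy as np

def constrained_fit(y, c, rho=1.0, iters=60):
    """Augmented Lagrangian for min_x 0.5||x - y||^2 s.t. x[0] = c:
    alternate a closed-form x-update of the augmented objective with a
    dual-ascent update of the multiplier lam."""
    lam = 0.0
    x = y.copy()
    for _ in range(iters):
        # x-update: only the constrained entry differs from the data term's minimizer.
        x = y.copy()
        x[0] = (y[0] - lam + rho * c) / (1.0 + rho)
        lam += rho * (x[0] - c)       # multiplier update drives x[0] -> c
    return x

x = constrained_fit(np.array([3.0, 2.0]), c=1.0)
```

Unlike a pure quadratic penalty, the multiplier update enforces the constraint exactly in the limit without sending rho to infinity, which is why the approach suits hard prior knowledge such as known implant attenuation.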
LOFAR sparse image reconstruction
NASA Astrophysics Data System (ADS)
Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.
2015-03-01
Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased-array interferometer with multiple antennas distributed across Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometric measurement and "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Accounting for the direction-dependent effects (DDEs) of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results: We show that sparse reconstruction i) performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and improved, realistic structures of extended sources (in simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.
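The sparse-recovery machinery behind such CS imagers can be illustrated with a minimal iterative soft-thresholding (ISTA) loop that recovers an image from an incomplete set of its Fourier components. This is a generic sketch, not the LOFAR imager: the image size, sampling mask, regularization weight `lam`, and step size are invented for the toy example, and the sparsifying transform is simply the identity.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (safe for complex values)."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0)

def ista_fourier(y, mask, lam=0.05, n_iter=200):
    """Recover an image from undersampled Fourier samples y = mask * FFT(x),
    assuming the image itself is sparse (identity sparsifying transform)."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        grad = np.fft.ifft2(mask * resid, norm="ortho")
        x = soft_threshold(x - grad, lam)  # unit step: the operator norm is <= 1
    return x

# toy sky: three point sources, half of the Fourier samples kept
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[5, 7], truth[20, 12], truth[28, 30] = 1.0, 0.8, 0.6
mask = rng.random((32, 32)) < 0.5
y = mask * np.fft.fft2(truth, norm="ortho")
rec = ista_fourier(y, mask)
```

With enough iterations the brightest recovered pixel coincides with the brightest true source; soft thresholding biases amplitudes slightly downward, which is one reason practical CS imagers debias or reweight the solution.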
Restoration and reconstruction from overlapping images
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Kaiser, Daniel J.; Hanson, Andrew L.; Li, Jing
1997-01-01
This paper describes a technique for restoring and reconstructing a scene from overlapping images. In situations where there are multiple, overlapping images of the same scene, it may be desirable to create a single image that most closely approximates the scene, based on all of the data in the available images. For example, successive swaths acquired by NASA's planned Moderate Resolution Imaging Spectroradiometer (MODIS) will overlap, particularly at wide scan angles, creating a severe visual artifact in the output image. Resampling the overlapping swaths to produce a more accurate image on a uniform grid requires restoration and reconstruction. The one-pass restoration and reconstruction technique developed in this paper yields mean-square-optimal resampling, based on a comprehensive end-to-end system model that accounts for image overlap, and subject to user-defined and data-availability constraints on the spatial support of the filter.
Spectral image reconstruction through the PCA transform
NASA Astrophysics Data System (ADS)
Ma, Long; Qiu, Xuewei; Cong, Yangming
2015-12-01
Digital color image reproduction based on spectral information has become a field of much interest and practical importance in recent years. The representation of color in digital form with multi-band images is not very accurate, hence the use of spectral images is justified. Reconstructing high-dimensional spectral reflectance images from relatively low-dimensional camera signals is generally an ill-posed problem. The aim of this study is to use the principal component analysis (PCA) transform in spectral reflectance image reconstruction. The performance is evaluated by the mean, median and standard deviation of color-difference values. The mean, median and standard deviation of root mean square (RMS) errors between the reconstructed and the actual spectral images were also calculated. Simulation experiments conducted on a six-channel camera system and on spectral test images show the performance of the suggested method.
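The PCA-based reconstruction idea — represent reflectance spectra in a low-dimensional learned basis, then solve for the basis coefficients that explain the camera signals — can be sketched as follows. Everything here is synthetic and hypothetical: the training spectra, the six Gaussian channel sensitivities `S`, and the three-component basis are stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
bands = np.linspace(400.0, 700.0, 31)                 # wavelength samples, nm
smooth = np.stack([np.ones_like(bands),
                   np.sin(2 * np.pi * (bands - 400) / 300),
                   np.cos(2 * np.pi * (bands - 400) / 300)])  # smooth generators

train = rng.random((200, 3)) @ smooth * 0.3 + 0.4     # 200 training reflectances
mean = train.mean(axis=0)
# PCA basis: leading right-singular vectors of the centred training set
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:3].T                                          # 31 x 3 basis

# hypothetical six-channel camera with Gaussian spectral sensitivities
centres = np.linspace(420.0, 680.0, 6)
S = np.exp(-0.5 * ((bands[None, :] - centres[:, None]) / 30.0) ** 2)  # 6 x 31

r_true = np.array([0.2, 0.5, -0.3]) @ smooth * 0.3 + 0.4  # unseen spectrum
c = S @ r_true                                            # camera response
a = np.linalg.lstsq(S @ B, c - S @ mean, rcond=None)[0]   # PCA coefficients
r_hat = mean + B @ a                                      # reconstructed spectrum
```

Because the toy spectra lie exactly in a three-dimensional smooth subspace, the reconstruction is essentially exact; with real reflectances the residual is what the mean/median/standard-deviation error statistics in the abstract quantify.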
Bayesian image reconstruction in astronomy
NASA Astrophysics Data System (ADS)
Nunez, Jorge; Llacer, Jorge
1990-09-01
This paper presents the development and testing of a new iterative reconstruction algorithm for astronomy. A maximum a posteriori method of image reconstruction in the Bayesian statistical framework is proposed for the Poisson-noise case. The method uses the entropy with an adjustable 'sharpness parameter' to define the prior probability and the likelihood with 'data increment' parameters to define the conditional probability. The method makes it possible to obtain reconstructions with neither the problem of the 'grey' reconstructions associated with the pure Bayesian reconstructions nor the problem of image deterioration, typical of the maximum-likelihood method. The present iterative algorithm is fast and stable, maintains positivity, and converges to feasible images.
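The paper's Bayesian algorithm itself is more elaborate, but the positivity-preserving multiplicative structure it shares with maximum-likelihood methods can be illustrated with the classical Richardson-Lucy (ML-EM) update for Poisson data. The 1D "star field", PSF, and iteration count below are invented for the sketch.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=300):
    """Multiplicative ML-EM update for a Poisson model y ~ Poisson(psf * x).
    Positivity is preserved automatically: every factor is non-negative."""
    x = np.full_like(y, y.mean())
    psf_flip = psf[::-1]  # adjoint of the convolution
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x

# toy 1D "star field": two point sources blurred by a normalised PSF
psf = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
truth = np.zeros(40)
truth[12], truth[27] = 5.0, 3.0
y = np.convolve(truth, psf, mode="same")  # noise-free blurred observation
x_hat = richardson_lucy(y, psf)
```

On noise-free data the iteration re-concentrates the blurred flux onto the true source positions; on noisy data it is usually stopped early or regularized, which is where prior terms like the entropy above come in.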
Image Contrast in Holographic Reconstructions
ERIC Educational Resources Information Center
Russell, B. R.
1969-01-01
The fundamental concepts of holography are explained using elementary wave ideas. Discusses wavefront reconstruction and contrast in holographic images. The consequence of recording only the intensity at a given surface and using an oblique reference wave is shown to be an incomplete reconstruction, resulting in an image of low contrast. (LC)
An accurate registration technique for distorted images
NASA Technical Reports Server (NTRS)
Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis
1990-01-01
Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
Automated End-to-End Workflow for Precise and Geo-accurate Reconstructions using Fiducial Markers
NASA Astrophysics Data System (ADS)
Rumpler, M.; Daftry, S.; Tscharf, A.; Prettenthaler, R.; Hoppe, C.; Mayer, G.; Bischof, H.
2014-08-01
Photogrammetric computer vision systems have been well established in many scientific and commercial fields during the last decades. Recent developments in image-based 3D reconstruction systems, in conjunction with the availability of affordable, high-quality digital consumer-grade cameras, have resulted in an easy way of creating visually appealing 3D models. However, many of these methods require manual steps in the processing chain, and for many photogrammetric applications such as mapping, recurrent topographic surveys or architectural and archaeological 3D documentation, high accuracy in a geo-coordinate system is required, which often cannot be guaranteed. Hence, in this paper we present and advocate a fully automated end-to-end workflow for precise and geo-accurate 3D reconstructions using fiducial markers. We integrate an automatic camera calibration and georeferencing method into our image-based reconstruction pipeline based on binary-coded fiducial markers as artificial, individually identifiable landmarks in the scene. Additionally, we facilitate the use of these markers in conjunction with known ground control points (GCPs) in the bundle adjustment, and use an online feedback method that allows assessment of the final reconstruction quality in terms of image overlap, ground sampling distance (GSD) and completeness, and thus provides the flexibility to adapt the image acquisition strategy already during image recording. An extensive set of experiments is presented which demonstrates the accuracy benefits of our approach: a highly accurate and geographically aligned reconstruction with an absolute point-position uncertainty of about 1.5 times the ground sampling distance.
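The georeferencing step — rigidly aligning an image-based reconstruction to known ground control point coordinates — amounts to a least-squares similarity transform (scale, rotation, translation). A sketch using Umeyama's closed-form solution, with made-up point correspondences standing in for marker/GCP measurements:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama): find scale s, rotation R,
    and translation t minimising sum ||dst_i - (s * R @ src_i + t)||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dc.T @ sc / len(src))  # cross-covariance
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against a reflection
        D[-1, -1] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / sc.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# synthetic "GCPs": model-space points and their geo-space counterparts
rng = np.random.default_rng(4)
src = rng.random((10, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0                 # make Q a proper rotation
dst = 2.5 * src @ Q.T + np.array([10.0, -5.0, 3.0])
s, R, t = similarity_transform(src, dst)
```

For noise-free correspondences the known scale, rotation, and translation are recovered exactly; with real marker detections the same formula gives the least-squares fit used to place the model in geo-coordinates.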
Geometric reconstruction using tracked ultrasound strain imaging
NASA Astrophysics Data System (ADS)
Pheiffer, Thomas S.; Simpson, Amber L.; Ondrake, Janet E.; Miga, Michael I.
2013-03-01
The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is the standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were approximately 1.5 to 2.4 cm3, similar to the CT volumes of 1.0 to 2.3 cm3. Future work will be done to robustly characterize the reconstruction accuracy of the system.
Multi-contrast magnetic resonance image reconstruction
NASA Astrophysics Data System (ADS)
Liu, Meng; Chen, Yunmei; Zhang, Hao; Huang, Feng
2015-03-01
In clinical exams, multi-contrast images from conventional MRI are scanned with the same field of view (FOV) for complementary diagnostic information, such as proton density- (PD-), T1- and T2-weighted images. Their sharable information can be utilized for more robust and accurate image reconstruction. In this work, we propose a novel model and an efficient algorithm for joint image reconstruction and coil sensitivity estimation in multi-contrast partially parallel imaging (PPI) in MRI. Our algorithm restores the multi-contrast images by minimizing an energy function consisting of an L2-norm fidelity term to reduce reconstruction errors caused by motion, a regularization term that preserves common anatomical features of the underlying images using a vectorial total variation (VTV) regularizer, and a Tikhonov smoothness term for updating the sensitivity maps based on their physical properties. We present numerical results, including T1- and T2-weighted MR images recovered from partially scanned k-space data, and provide comparisons between our results and those obtained from related existing works. Our numerical results indicate that the proposed method, using vectorial TV and penalties on the sensitivities, is promising for wide use in multi-contrast, multi-channel MR image reconstruction.
Accurate and efficient reconstruction of deep phylogenies from structured RNAs
Stocsits, Roman R.; Letsch, Harald; Hertel, Jana; Misof, Bernhard; Stadler, Peter F.
2009-01-01
Ribosomal RNA (rRNA) genes are probably the most frequently used data source in phylogenetic reconstruction. Individual columns of rRNA alignments are not independent as a consequence of their highly conserved secondary structures. Unless explicitly taken into account, these correlations can distort the phylogenetic signal and/or lead to gross overestimates of tree stability. Maximum likelihood and Bayesian approaches are of course amenable to using RNA-specific substitution models that treat conserved base pairs appropriately, but require accurate secondary structure models as input. So far, however, no accurate and easy-to-use tool has been available for computing structure-aware alignments and consensus structures that can deal with the large rRNAs. The RNAsalsa approach is designed to fill this gap. Capitalizing on the improved accuracy of pairwise consensus structures and informed by a priori knowledge of group-specific structural constraints, the tool provides both alignments and consensus structures that are of sufficient accuracy for routine phylogenetic analysis based on RNA-specific substitution models. The power of the approach is demonstrated using two rRNA data sets: a mitochondrial rRNA set of 26 Mammalia, and a collection of 28S nuclear rRNAs representative of the five major echinoderm groups. PMID:19723687
Accurate and efficient reconstruction of deep phylogenies from structured RNAs.
Stocsits, Roman R; Letsch, Harald; Hertel, Jana; Misof, Bernhard; Stadler, Peter F
2009-10-01
Ribosomal RNA (rRNA) genes are probably the most frequently used data source in phylogenetic reconstruction. Individual columns of rRNA alignments are not independent as a consequence of their highly conserved secondary structures. Unless explicitly taken into account, these correlations can distort the phylogenetic signal and/or lead to gross overestimates of tree stability. Maximum likelihood and Bayesian approaches are of course amenable to using RNA-specific substitution models that treat conserved base pairs appropriately, but require accurate secondary structure models as input. So far, however, no accurate and easy-to-use tool has been available for computing structure-aware alignments and consensus structures that can deal with the large rRNAs. The RNAsalsa approach is designed to fill this gap. Capitalizing on the improved accuracy of pairwise consensus structures and informed by a priori knowledge of group-specific structural constraints, the tool provides both alignments and consensus structures that are of sufficient accuracy for routine phylogenetic analysis based on RNA-specific substitution models. The power of the approach is demonstrated using two rRNA data sets: a mitochondrial rRNA set of 26 Mammalia, and a collection of 28S nuclear rRNAs representative of the five major echinoderm groups. PMID:19723687
Reconstruction techniques for optoacoustic imaging
NASA Astrophysics Data System (ADS)
Frenz, Martin; Koestli, Kornel P.; Paltauf, Guenther; Schmidt-Kloiber, Heinz; Weber, Heinz P.
2001-06-01
Optoacoustics is a method of gaining information from inside a tissue. This is done by irradiating the tissue with a short light pulse, which generates a pressure distribution inside the tissue that mirrors the absorber distribution. The pressure distribution measured on the tissue surface allows a tomographic image of the absorber distribution to be calculated by applying a back-projection method. This study presents a novel computational algorithm based on the Fourier transform which, at least in principle, yields an exact 3D reconstruction of the distribution of absorbed energy density inside turbid media. The reconstruction is based on 2D pressure distributions captured outside the tissue at different times. The FFT reconstruction algorithm is first tested in the back projection of simulated pressure transients of small model absorbers, and finally applied to reconstruct the distribution of artificial blood vessels in three dimensions.
Image processing and reconstruction
Chartrand, Rick
2012-06-15
This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
Towards an accurate volume reconstruction in atom probe tomography.
Beinke, Daniel; Oberdorfer, Christian; Schmitz, Guido
2016-06-01
An alternative concept for the reconstruction of atom probe data is outlined. It is based on the calculation of realistic trajectories of the evaporated ions in a recursive refinement process. To this end, the electrostatic problem is solved on a Delaunay tessellation. To enable the trajectory calculation, the order of reconstruction is inverted with respect to previous reconstruction schemes: the last atom detected is reconstructed first. In this way, the emitter shape, which controls the trajectory, can be defined throughout the duration of the reconstruction. A proof of concept is presented for 3D model tips, containing spherical precipitates or embedded layers of strongly contrasting evaporation thresholds. While the traditional method following Bas et al. generates serious distortions in these cases, a reconstruction with the proposed electrostatically informed approach improves the geometry of layers and particles significantly. PMID:27062338
Cerec: correlation, an accurate and practical method for occlusal reconstruction.
Prévost, A P; Bouchard, Y
2001-07-01
The correlation technique explained here shows one of the possibilities for occlusal reconstruction offered by the Cerec approach. The various stages of this technique are described and illustrated. The most current applications are reviewed. PMID:11862885
Filtering in SPECT Image Reconstruction
Lyra, Maria; Ploussi, Agapi
2011-01-01
Single photon emission computed tomography (SPECT) imaging is widely implemented in nuclear medicine, as its clinical role in the diagnosis and management of several diseases is often very helpful (e.g., myocardium perfusion imaging). The quality of SPECT images is degraded by several factors, such as noise due to the limited number of counts, attenuation, or scatter of photons. Image filtering is necessary to compensate for these effects and, therefore, to improve image quality. The goal of filtering in tomographic images is to suppress statistical noise while simultaneously preserving spatial resolution and contrast. The aim of this work is to describe the most widely used filters in SPECT applications and how these affect image quality. The choice of the filter type, the cut-off frequency and the order is a major problem in clinical routine. In many clinical cases, information for specific parameters is not provided, and findings cannot be extrapolated to other similar SPECT imaging applications. A literature review for the determination of the most used filters in cardiac, brain, bone, liver, kidney, and thyroid applications is also presented. As the overview shows, no filter is perfect, and the selection of the proper filters is most of the time done empirically. The standardization of image-processing results may limit the filter types for each SPECT examination to a few filters and some of their parameters. Standardization also helps in reducing image-processing time, as the filters and their parameters must be standardized before being put to clinical use. Standardized commercial reconstruction software selections lead to comparable results across departments. The manufacturers normally supply default filters/parameters, but these may not be relevant in various clinical situations. After proper standardization, it is possible to use many suitable filters or one optimal filter. PMID:21760768
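A typical SPECT post-filter of the kind surveyed here is the Butterworth low-pass, applied multiplicatively in the frequency domain with a chosen cut-off and order. A minimal sketch (note that two gain conventions circulate; the nuclear-medicine form 1/(1 + (f/fc)^(2n)) is assumed here, and the cut-off and order values are arbitrary):

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order):
    """Radially symmetric Butterworth low-pass on the 2D FFT grid.
    Frequencies are in cycles/pixel; gain = 1 / (1 + (f/fc)^(2n))."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fy, fx)
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order))

def filter_image(img, cutoff=0.2, order=5):
    """Apply the filter multiplicatively in the frequency domain."""
    H = butterworth_lowpass(img.shape, cutoff, order)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# noisy flat disc: the filter suppresses noise while keeping the mean level
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[16:48, 16:48] = 10.0
noisy = img + rng.normal(0.0, 1.0, img.shape)
smoothed = filter_image(noisy)
```

Lowering the cut-off or raising the order smooths more aggressively, which is exactly the resolution-versus-noise trade-off the review discusses.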
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
Reconstruction of coded aperture images
NASA Technical Reports Server (NTRS)
Bielefeld, Michael J.; Yin, Lo I.
1987-01-01
The balanced correlation method and the maximum entropy method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a uniformly redundant array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. Massively parallel processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.
Synergistic image reconstruction for hybrid ultrasound and photoacoustic computed tomography
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Wang, Kun; Wang, Lihong V.; Anastasio, Mark A.
2015-03-01
Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Subsequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems into one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data either from a data file or in real time from a pair of detector heads, culling event data that are outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between the centers of lines of response (LORs), counts are then allocated either by nearest-pixel interpolation or by an overlap method, corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
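The MLEM iteration mentioned in the patent abstract applies a multiplicative update x ← x · Aᵀ(y / Ax) / Aᵀ1 over a system matrix A of detection probabilities. A toy sketch with a hypothetical geometry (row- and column-sum "lines of response" over a 4×4 image, not the PEM detector model):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """ML-EM for y ~ Poisson(A @ x): A[i, j] is the probability that an
    emission in pixel j is detected along line of response i."""
    sens = A.sum(axis=0)                       # sensitivity image, A^T 1
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        x = x * (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# hypothetical geometry: a 4x4 image observed along its 4 rows and 4 columns
n = 4
rows = np.kron(np.eye(n), np.ones((1, n)))     # row-sum LORs
cols = np.kron(np.ones((1, n)), np.eye(n))     # column-sum LORs
A = np.vstack([rows, cols])
truth = np.zeros(n * n)
truth[5] = 4.0                                 # single hot pixel at (1, 1)
y = A @ truth                                  # noise-free "sinogram"
x_hat = mlem(A, y)
```

The update is multiplicative, so positivity is preserved, and pixels crossed only by zero-count LORs are driven to zero; for this toy geometry the single hot pixel is the unique non-negative solution and MLEM recovers it.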
Super-Resolution Image Reconstruction Using Diffuse Source Models
Ellis, Michael A.; Viola, Francesco; Walker, William F.
2010-01-01
Image reconstruction is central to many scientific fields, from medical ultrasound and sonar to computed tomography and computer vision. While lenses play a critical reconstruction role in these fields, digital sensors enable more sophisticated computational approaches. A variety of computational methods have thus been developed, with the common goal of increasing contrast and resolution to extract the greatest possible information from raw data. This paper describes a new image reconstruction method named the Diffuse Time-domain Optimized Near-field Estimator (dTONE). dTONE represents each hypothetical target in the system model as a diffuse region of targets rather than a single discrete target, which more accurately represents the experimental data that arise from signal sources in continuous space, with no additional computational requirements at the time of image reconstruction. Simulation and experimental ultrasound images of animal tissues show that dTONE achieves image resolution and contrast far superior to those of conventional image reconstruction methods. We also demonstrate the increased robustness of the diffuse target model to major sources of image degradation, through the addition of electronic noise, phase aberration, and magnitude aberration to ultrasound simulations. Using experimental ultrasound data from a tissue-mimicking phantom containing a 3 mm diameter anechoic cyst, the conventionally reconstructed image has a cystic contrast of −6.3 dB whereas the dTONE image has a cystic contrast of −14.4 dB. PMID:20447760
Super-resolution image reconstruction using diffuse source models.
Ellis, Michael A; Viola, Francesco; Walker, William F
2010-06-01
Image reconstruction is central to many scientific fields, from medical ultrasound and sonar to computed tomography and computer vision. Although lenses play a critical reconstruction role in these fields, digital sensors enable more sophisticated computational approaches. A variety of computational methods have thus been developed, with the common goal of increasing contrast and resolution to extract the greatest possible information from raw data. This paper describes a new image reconstruction method named the Diffuse Time-domain Optimized Near-field Estimator (dTONE). dTONE represents each hypothetical target in the system model as a diffuse region of targets rather than a single discrete target, which more accurately represents the experimental data that arise from signal sources in continuous space, with no additional computational requirements at the time of image reconstruction. Simulation and experimental ultrasound images of animal tissues show that dTONE achieves image resolution and contrast far superior to those of conventional image reconstruction methods. We also demonstrate the increased robustness of the diffuse target model to major sources of image degradation through the addition of electronic noise, phase aberration and magnitude aberration to ultrasound simulations. Using experimental ultrasound data from a tissue-mimicking phantom containing a 3-mm-diameter anechoic cyst, the conventionally reconstructed image has a cystic contrast of -6.3 dB, whereas the dTONE image has a cystic contrast of -14.4 dB. PMID:20447760
Accurate reconstruction of insertion-deletion histories by statistical phylogenetics.
Westesson, Oscar; Lunter, Gerton; Paten, Benedict; Holmes, Ian
2012-01-01
The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history, or of structural similarity. Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically-sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes. PMID:22536326
Multiscale likelihood analysis and image reconstruction
NASA Astrophysics Data System (ADS)
Willett, Rebecca M.; Nowak, Robert D.
2003-11-01
The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in the image.
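At its core, PIBR balances a data-fidelity term against a penalty pulling the reconstruction toward the prior image, with the prior strength controlling whether anatomical change is admitted or suppressed. The quadratic caricature below (identity forward model, scalar beta, all data synthetic) only illustrates that trade-off; the actual method uses edge-preserving, spatially varying penalties and a model-based estimator.

```python
import numpy as np

def pibr_quadratic(A, y, x_prior, beta):
    """argmin_x ||A @ x - y||^2 + beta * ||x - x_prior||^2, via the normal
    equations. A quadratic caricature of PIBR: real PIBR methods use
    edge-preserving, spatially varying prior penalties rather than this one."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + beta * np.eye(n),
                           A.T @ y + beta * x_prior)

# 1D "anatomy": the prior image lacks a nodule present in the current scan
rng = np.random.default_rng(3)
n = 32
x_prior = np.ones(n)
x_true = x_prior.copy()
x_true[12:16] += 2.0                    # anatomical change since the prior
A = np.eye(n)                           # direct but noisy (low-dose) measurement
y = A @ x_true + rng.normal(0.0, 0.3, n)

x_lo = pibr_quadratic(A, y, x_prior, beta=0.25)  # weak prior: change admitted
x_hi = pibr_quadratic(A, y, x_prior, beta=50.0)  # strong prior: change suppressed
```

With a weak prior the nodule survives (along with measurement noise); with a strong prior the result collapses back onto the prior image and the change is lost, which is exactly the failure mode the proposed prior-strength selection aims to avoid.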
Prospective regularization design in prior-image-based reconstruction.
Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster
2015-12-21
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in
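The balance between prior and measured information described above can be made concrete in a small numerical sketch. Below is a toy 1D version of prior-image-penalized least squares with a spatially varying prior-strength map; the quadratic penalty, sizes, weights, and undersampling pattern are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Toy 1D sketch of prior-image-based reconstruction (PIBR) with a spatially
# varying prior-strength map.  All sizes, weights, and the quadratic prior
# penalty are invented for illustration.
n = 64
x_prior = np.zeros(n)
x_prior[20:40] = 1.0
x_true = x_prior.copy()
x_true[30:34] += 0.5                      # the anatomical change to admit

rows = np.arange(0, n, 2)                 # undersampled measurements
A = np.eye(n)[rows]
y = A @ x_true

# Objective: ||A x - y||^2 + beta * || w * (x - x_prior) ||^2
beta = 1.0
w = np.ones(n)
w[28:36] = 0.1                            # weaken the prior near the expected change

x = x_prior.copy()
step = 0.4                                # below 2 / Lipschitz constant here
for _ in range(500):
    grad = 2 * A.T @ (A @ x - y) + 2 * beta * w**2 * (x - x_prior)
    x -= step * grad

err_pibr = np.linalg.norm(x - x_true)
err_prior = np.linalg.norm(x_prior - x_true)
```

With the prior strength lowered over the change region, the measured samples of the change are admitted while unmeasured pixels fall back on the prior, so the reconstruction error drops below that of the prior image alone.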
Digital holographic method for tomography-image reconstruction
NASA Astrophysics Data System (ADS)
Liu, Cheng; Yan, Changchun; Gao, Shumei
2004-02-01
A digital holographic method for three-dimensional reconstruction of tomography images is demonstrated theoretically and experimentally. In this proposed method, a numerical hologram is first computed by calculating the total diffraction field of all transect images of a detected organ. Then, the numerical hologram is transferred to the usual recording medium to generate a physical hologram. Last, all the transect images are reconstructed in their original position by illuminating the physical hologram with a laser, thereby forming a three-dimensional transparent image of the organ detected. Due to its true third dimension, the reconstructed image using this method is much more vivid and accurate than that of other methods. Potentially, it may have great prospects for application in medical engineering.
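The "total diffraction field" computation at the heart of this method can be sketched with the angular spectrum propagation method: each transect image is propagated to a common hologram plane and the fields are superposed, and back-propagation refocuses a slice in its original position. The wavelength, pixel pitch, and slice depths below are assumed values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D field a distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A numerical "hologram" as the superposed diffraction fields of two
# transect images placed at different depths.
n, dx, lam = 128, 10e-6, 633e-9
slice1 = np.zeros((n, n)); slice1[40:50, 40:50] = 1.0
slice2 = np.zeros((n, n)); slice2[80:90, 80:90] = 1.0
hologram_plane = (angular_spectrum_propagate(slice1, lam, dx, 0.02)
                  + angular_spectrum_propagate(slice2, lam, dx, 0.03))

# Back-propagating 0.02 m refocuses slice1 in its original position.
refocus = angular_spectrum_propagate(hologram_plane, lam, dx, -0.02)
```

The transfer function has unit modulus in the propagating band, so propagation conserves energy and back-propagation is its exact inverse; the out-of-focus slice appears only as a weak defocused background.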
Image reconstruction in transcranial photoacoustic computed tomography of the brain
NASA Astrophysics Data System (ADS)
Mitsuhashi, Kenji; Wang, Lihong V.; Anastasio, Mark A.
2015-03-01
Photoacoustic computed tomography (PACT) holds great promise for transcranial brain imaging. However, the strong reflection, scattering, attenuation, and mode-conversion of photoacoustic waves in the skull pose serious challenges to establishing the method. The lack of an appropriate model of solid media in conventional PACT imaging models, which are based on the canonical scalar wave equation, causes a significant model mismatch in the presence of the skull and thus results in deteriorated reconstructed images. The goal of this study was to develop an image reconstruction algorithm that accurately models the skull and thereby ameliorates the quality of reconstructed images. The propagation of photoacoustic waves through the skull was modeled by a viscoelastic stress tensor wave equation, which was subsequently discretized by use of a staggered grid fourth-order finite-difference time-domain (FDTD) method. The matched adjoint of the FDTD-based wave propagation operator was derived for implementing a back-projection operator. Systematic computer simulations were conducted to demonstrate the effectiveness of the back-projection operator for reconstructing images in a realistic three-dimensional PACT brain imaging system. The results suggest that the proposed algorithm can successfully reconstruct images from transcranially-measured pressure data and readily be translated to clinical PACT brain imaging applications.
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.
Digital Three-dimensional Reconstruction Based On Integral Imaging
Li, Chao; Chen, Qian; Hua, Hong; Mao, Chen; Shao, Ajun
2015-01-01
This paper presents a digital three-dimensional reconstruction method based on a set of small-baseline elemental images captured with a micro-lens array and a CCD sensor. We adopt the ASIFT (Affine Scale-Invariant Feature Transform) operator as the image registration method. Among the set of captured elemental images, the elemental image located in the middle of the overall image field is used as the reference, and corresponding matching points in each surrounding elemental image are calculated, which enables accurate computation of the depth of object points relative to the reference image frame. Finally, an optimization algorithm applied to the redundant matching points achieves the 3D reconstruction. Our experimental results demonstrate the excellent accuracy and speed of the proposed algorithm. PMID:26236151
Padhi, Shantanu K.; Howard, John
2013-01-01
Nonlinear microwave imaging relies heavily on an accurate numerical electromagnetic model of the antenna system. The model is used to simulate scattering data that is compared to its measured counterpart in order to reconstruct the image. In this paper an antenna system immersed in water is used to image different canonical objects in order to investigate the implication of modeling errors on the final reconstruction, using a time-domain iterative inverse reconstruction algorithm and three-dimensional FDTD modeling. With the test objects immersed in backgrounds of air and tap water, respectively, we have studied the impact of antenna modeling errors and errors in the modeling of the background media, and made a comparison with a two-dimensional version of the algorithm. In conclusion, even small modeling errors in the antennas can significantly alter the reconstructed image. Since the image reconstruction procedure is highly nonlinear, general conclusions are very difficult to draw. In our case it means that, with the antenna system immersed in water and using our present FDTD-based electromagnetic model, the imaging results are improved by refraining from modeling the water-wall-air interface and instead using a homogeneous background of water in the model. PMID:23606825
Joint image reconstruction and sensitivity estimation in SENSE (JSENSE).
Ying, Leslie; Sheng, Jinhua
2007-06-01
Parallel magnetic resonance imaging (pMRI) using multichannel receiver coils has emerged as an effective tool to reduce imaging time in various applications. However, the issue of accurate estimation of coil sensitivities has not been fully addressed, which limits the level of speed enhancement achievable with the technology. The self-calibrating (SC) technique for sensitivity extraction has been well accepted, especially for dynamic imaging, and complements the common calibration technique that uses a separate scan. However, the existing method to extract the sensitivity information from the SC data is not accurate enough when the amount of data is small, and thus erroneous sensitivities affect the reconstruction quality when they are directly applied to the reconstruction equation. This paper considers this problem of error propagation in the sequential procedure of sensitivity estimation followed by image reconstruction in existing methods, such as sensitivity encoding (SENSE) and simultaneous acquisition of spatial harmonics (SMASH), and reformulates the image reconstruction problem as a joint estimation of the coil sensitivities and the desired image, which is solved by an iterative optimization algorithm. The proposed method was tested on various data sets. The results from a set of in vivo data are shown to demonstrate the effectiveness of the proposed method, especially when a rather large net acceleration factor is used. PMID:17534910
Image Reconstruction Using Analysis Model Prior.
Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping
2016-01-01
The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator to the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum number of measurements which are lower than those in cases without using the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications of sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
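The coordinate-descent machinery underlying these algorithms can be illustrated on the convex, fixed-grid special case. The sketch below is a plain cyclic coordinate-descent solver for the l1-regularized least-squares subproblem, not the paper's dichotomous (power-of-two step) variant or its homotopy/non-convex extensions; the problem sizes are invented.

```python
import numpy as np

def cd_lasso(A, y, lam, n_sweeps=200):
    """min_x 0.5*||A x - y||^2 + lam*||x||_1 by cyclic coordinate descent."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()                          # running residual y - A @ x
    col_sq = np.sum(A**2, axis=0)
    for _ in range(n_sweeps):
        for j in range(n):
            r += A[:, j] * x[j]           # take coordinate j out of the fit
            rho = A[:, j] @ r
            # exact coordinate-wise minimizer: soft-thresholding
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= A[:, j] * x[j]           # put the updated coordinate back
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [1.0, -2.0, 1.5]
y = A @ x_true                            # noiseless measurements
x_hat = cd_lasso(A, y, lam=0.01)
```

Each coordinate update costs only one column of A (the appeal of DCD-style iterations for real-time use); with a small penalty and noiseless data, the sparse support is recovered almost exactly.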
Information Propagation in Prior-Image-Based Reconstruction
Stayman, J. Webster; Prince, Jerry L.; Siewerdsen, Jeffrey H.
2016-01-01
Advanced reconstruction methods for computed tomography include sophisticated forward models of the imaging system that capture the pertinent physical processes affecting the signal and noise in projection measurements. However, most do little to integrate prior knowledge of the subject – often relying only on very general notions of local smoothness or edges. In many cases, as in longitudinal surveillance or interventional imaging, a patient has undergone a sequence of studies prior to the current image acquisition that hold a wealth of prior information on patient-specific anatomy. While traditional techniques tend to treat each data acquisition as an isolated event and disregard such valuable patient-specific prior information, some reconstruction methods, such as PICCS[1] and PIR-PLE[2], can incorporate prior images into a reconstruction objective function. Inclusion of such information allows for dramatic reduction in the data fidelity requirements and more robust accommodation of substantial undersampling and exposure reduction, with consequent benefits to imaging speed and reduced radiation dose. While such prior-image-based methods offer tremendous promise, the introduction of prior information in the reconstruction raises significant concern regarding the accurate representation of features in the image and whether those features arise from the current data acquisition or from the prior images. In this work we propose a novel framework to analyze the propagation of information in prior-image-based reconstruction by decomposing the estimation into distinct components supported by the current data acquisition and by the prior image. This decomposition quantifies the contributions from prior and current data as a spatial map and can trace specific features in the image to their source. Such “information source maps” can potentially be used as a check on confidence that a given image feature arises from the current data or from the prior and to more
4D image reconstruction for emission tomography
NASA Astrophysics Data System (ADS)
Reader, Andrew J.; Verhaeghe, Jeroen
2014-11-01
An overview of the theory of 4D image reconstruction for emission tomography is given along with a review of the current state of the art, covering both positron emission tomography and single photon emission computed tomography (SPECT). By viewing 4D image reconstruction as a matter of either linear or non-linear parameter estimation for a set of spatiotemporal functions chosen to approximately represent the radiotracer distribution, the areas of so-called ‘fully 4D’ image reconstruction and ‘direct kinetic parameter estimation’ are unified within a common framework. Many choices of linear and non-linear parameterization of these functions are considered (including the important case where the parameters have direct biological meaning), along with a review of the algorithms which are able to estimate these often non-linear parameters from emission tomography data. The other crucial components to image reconstruction (the objective function, the system model and the raw data format) are also covered, but in less detail due to the relatively straightforward extension from their corresponding components in conventional 3D image reconstruction. The key unifying concept is that maximum likelihood or maximum a posteriori (MAP) estimation of either linear or non-linear model parameters can be achieved in image space after carrying out a conventional expectation maximization (EM) update of the dynamic image series, using a Kullback-Leibler distance metric (comparing the modeled image values with the EM image values), to optimize the desired parameters. For MAP, an image-space penalty for regularization purposes is required. The benefits of 4D and direct reconstruction reported in the literature are reviewed, and furthermore demonstrated with simple simulation examples. It is clear that the future of reconstructing dynamic or functional emission tomography images, which often exhibit high levels of spatially correlated noise, should ideally exploit these 4D
Image reconstruction for robot assisted ultrasound tomography
NASA Astrophysics Data System (ADS)
Aalamifar, Fereshteh; Zhang, Haichong K.; Rahmim, Arman; Boctor, Emad M.
2016-04-01
An investigation of several image reconstruction methods for a robot-assisted ultrasound (US) tomography setup is presented. In the robot-assisted setup, an expert moves the US probe to the location of interest, and a robotic arm automatically aligns another US probe with it. The two aligned probes can then transmit and receive US signals which are subsequently used for tomographic reconstruction. This study focuses on reconstruction of the speed of sound. In various simulation evaluations, as well as in an experiment with millimeter-range inaccuracy, we demonstrate that the limited data provided by two probes can be used to reconstruct pixel-wise images differentiating between media with different speeds of sound. Combining the results of this investigation with the developed robot-assisted US tomography setup, we envision the feasibility of this setup for tomographic imaging in applications beyond breast imaging, with potentially significant efficacy in cancer diagnosis.
Image Reconstruction for Prostate Specific Nuclear Medicine imagers
Mark Smith
2007-01-11
There is increasing interest in the design and construction of nuclear medicine detectors for dedicated prostate imaging. These include detectors designed for imaging the biodistribution of radiopharmaceuticals labeled with single gamma as well as positron-emitting radionuclides. New detectors and acquisition geometries present challenges and opportunities for image reconstruction. In this contribution various strategies for image reconstruction for these special purpose imagers are reviewed. Iterative statistical algorithms provide a framework for reconstructing prostate images from a wide variety of detectors and acquisition geometries for PET and SPECT. The key to their success is modeling the physics of photon transport and data acquisition and the Poisson statistics of nuclear decay. Analytic image reconstruction methods can be fast and are useful for favorable acquisition geometries. Future perspectives on algorithm development and data analysis for prostate imaging are presented.
Bayesian image reconstruction: Application to emission tomography
Nunez, J.; Llacer, J.
1989-02-01
In this paper we propose a Maximum a Posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and likelihood to define the conditional probability. The method uses sharpness parameters which can be theoretically computed or adjusted, allowing us to obtain MAP reconstructions without the problem of the "grey" reconstructions associated with pre-Bayesian reconstructions. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the Maximum Likelihood Estimate method. We have successfully applied the new method to the case of Emission Tomography, both with simulated and real data. 41 refs., 4 figs., 1 tab.
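The baseline the authors improve on is the classical MLEM iteration for Poisson data, whose multiplicative updates are exactly what makes positivity easy to maintain. A minimal sketch (with an invented random system matrix, and without the entropy prior) looks like this:

```python
import numpy as np

# Baseline MLEM iteration for Poisson emission data; sizes and the
# random system matrix are assumed for illustration only.
rng = np.random.default_rng(2)
n_pix, n_bins = 20, 40
P = rng.random((n_bins, n_pix))
P /= P.sum(axis=0)                        # every emitted count is detected
x_true = rng.random(n_pix) * 100.0
y = rng.poisson(P @ x_true)               # Poisson projection data

def loglik(x):
    lam = np.maximum(P @ x, 1e-12)
    return np.sum(y * np.log(lam) - lam)

x = np.ones(n_pix)                        # flat, strictly positive start
ll_start = loglik(x)
for _ in range(200):
    proj = np.maximum(P @ x, 1e-12)
    x *= P.T @ (y / proj)                 # sensitivity = 1 by construction

ll_end = loglik(x)
```

Each update multiplies a positive iterate by a positive factor, so positivity is preserved automatically, the log-likelihood increases monotonically, and (with unit column sums) total counts are conserved exactly.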
Knee imaging after anterior cruciate ligament reconstruction.
Rodrigues, M B; Silva, J J; Homsi, C; Stump, X M; Lecouvet, F E
2001-01-01
An increasing number of reconstructions of the anterior cruciate ligament (ACL) are performed every year, due to both the increasing occurrence of sport related injuries and the development of diagnostic and surgical techniques. The most used surgical procedure for the torn ACL reconstruction is the use of autogenous material, most often the patellar and semitendinosus tendons. Magnetic resonance (MR) imaging and spiral-CT performed after arthrography with multiplanar reconstructions are the imaging methods of choice for post-operative evaluation of ACL ligamentoplasty. This paper provides a brief bibliographic and more extensive pictorial review of the normal evolution and possible complications after ACL repair. PMID:11817479
MODEL-BASED IMAGE RECONSTRUCTION FOR MRI
Fessler, Jeffrey A.
2010-01-01
Magnetic resonance imaging (MRI) is a sophisticated and versatile medical imaging modality. Traditionally, MR images are reconstructed from the raw measurements by a simple inverse 2D or 3D fast Fourier transform (FFT). However, there are a growing number of MRI applications where a simple inverse FFT is inadequate, e.g., due to non-Cartesian sampling patterns, non-Fourier physical effects, nonlinear magnetic fields, or deliberate under-sampling to reduce scan times. Such considerations have led to increasing interest in methods for model-based image reconstruction in MRI. PMID:21135916
Chappelow, Jonathan; Tomaszewski, John E.; Feldman, Michael; Shih, Natalie; Madabhushi, Anant
2011-01-01
We present an interactive program called HistoStitcher© for accurate and rapid reassembly of histology fragments into a pseudo-whole digitized histological section. HistoStitcher© provides both an intuitive graphical interface to assist the operator in performing the stitch of adjacent histology fragments by selecting pairs of anatomical landmarks, and a set of computational routines for determining and applying an optimal linear transformation to generate the stitched image. Reconstruction of whole histological sections from images of slides containing smaller fragments is required in applications where preparation of whole sections of large tissue specimens is not feasible or efficient, and such whole mounts are required to facilitate (a) disease annotation and (b) image registration with radiological images. Unlike manual reassembly of image fragments in a general purpose image editing program (such as Photoshop), HistoStitcher© provides memory efficient operation on high resolution digitized histology images and a highly flexible stitching process capable of producing more accurate results in less time. Further, by parameterizing the series of transformations determined by the stitching process, the stitching parameters can be saved, loaded at a later time, refined, or reapplied to multi-resolution scans, or quickly transmitted to another site. In this paper, we describe in detail the design of HistoStitcher© and the mathematical routines used for calculating the optimal image transformation, and demonstrate its operation for stitching high resolution histology quadrants of a prostate specimen to form a digitally reassembled whole histology section, for 8 different patient studies. To evaluate stitching quality, a 6 point scoring scheme, which assesses the alignment and continuity of anatomical structures important for disease annotation, is employed by three independent expert pathologists. For 6 studies compared with this scheme, reconstructed sections
Heuristic optimization in penumbral image for high resolution reconstructed image
Azuma, R.; Nozaki, S.; Fujioka, S.; Chen, Y. W.; Namihira, Y.
2010-10-15
Penumbral imaging is a technique which uses the fact that spatial information can be recovered from the shadow, or penumbra, that an unknown source casts through a simple large circular aperture. The size of the penumbral image on the detector can be determined mathematically from the aperture size, object size, and magnification. Conventional reconstruction methods are very sensitive to noise, whereas the heuristic reconstruction method is very tolerant of it. However, the aperture size influences the accuracy and resolution of the reconstructed image. In this article, we propose an optimization of the aperture size for neutron penumbral imaging.
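The forward model behind penumbral imaging is, to a good approximation, a convolution of the source with the large circular aperture, and one conventional (noise-sensitive) reconstruction is inverse filtering. The sketch below simulates a penumbral image and recovers the source with a Wiener-regularized inverse filter; aperture/source sizes and the noise level are arbitrary, and this is the conventional approach the abstract contrasts with, not the authors' heuristic method.

```python
import numpy as np

# Penumbral image = source convolved with a big circular aperture;
# recover the source by Wiener-regularized inverse filtering.
n = 128
yy, xx = np.mgrid[:n, :n] - n // 2
source = (xx**2 + yy**2 < 5**2).astype(float)      # emission spot
aperture = (xx**2 + yy**2 < 20**2).astype(float)   # large circular aperture
aperture /= aperture.sum()

H = np.fft.fft2(np.fft.ifftshift(aperture))        # aperture transfer function
penumbra = np.real(np.fft.ifft2(np.fft.fft2(source) * H))
penumbra += 1e-4 * np.random.default_rng(3).standard_normal((n, n))

eps = 1e-3                                         # Wiener regularization
recon = np.real(np.fft.ifft2(np.fft.fft2(penumbra) * np.conj(H)
                             / (np.abs(H)**2 + eps)))
```

The regularizer eps suppresses frequencies near the zeros of the aperture response, where plain inversion would amplify noise without bound; the recovered spot keeps its total intensity and position.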
BIOFILM IMAGE RECONSTRUCTION FOR ASSESSING STRUCTURAL PARAMETERS
Renslow, Ryan; Lewandowski, Zbigniew; Beyenal, Haluk
2011-01-01
The structure of biofilms can be numerically quantified from microscopy images using structural parameters. These parameters are used in biofilm image analysis to compare biofilms, to monitor temporal variation in biofilm structure, to quantify the effects of antibiotics on biofilm structure and to determine the effects of environmental conditions on biofilm structure. It is often hypothesized that biofilms with similar structural parameter values will have similar structures; however, this hypothesis has never been tested. The main goal was to test the hypothesis that the commonly used structural parameters can characterize the differences or similarities between biofilm structures. To achieve this goal, (1) biofilm image reconstruction was developed as a new tool for assessing structural parameters, (2) independent reconstructions using the same starting structural parameters were tested to see how they differed from each other, (3) the effect of the original image parameter values on reconstruction success was evaluated and (4) the effect of the number and type of the parameters on reconstruction success was evaluated. It was found that two biofilms characterized by identical commonly used structural parameter values may look different, that the number and size of clusters in the original biofilm image affect image reconstruction success and that, in general, a small set of arbitrarily selected parameters may not reveal relevant differences between biofilm structures. PMID:21280029
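Two of the commonly used structural parameters referenced above can be computed directly from a binarized biofilm image: areal porosity (void fraction) and the number of biomass clusters. A minimal sketch on an invented toy image, with cluster counting done by a 4-connected flood fill:

```python
import numpy as np

# Toy binarized biofilm image: 1 = biomass, 0 = void (values assumed).
img = np.zeros((12, 12), dtype=int)
img[1:5, 1:5] = 1            # cluster 1
img[7:11, 6:10] = 1          # cluster 2

porosity = 1.0 - img.mean()  # areal porosity = void fraction

def count_clusters(binary):
    """Count 4-connected clusters of 1-pixels with an iterative flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    clusters = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and not seen[i, j]:
                clusters += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = a + da, b + db
                        if (0 <= u < binary.shape[0] and 0 <= v < binary.shape[1]
                                and binary[u, v] and not seen[u, v]):
                            seen[u, v] = True
                            stack.append((u, v))
    return clusters

n_clusters = count_clusters(img)
```

The reconstruction experiments in the paper start from parameter values like these and ask whether images sharing them actually look alike.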
Approach for reconstructing anisoplanatic adaptive optics images.
Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J
2007-08-20
Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement of the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366
Elasticity reconstructive imaging by means of stimulated echo MRI.
Chenevert, T L; Skovoroda, A R; O'Donnell, M; Emelianov, S Y
1998-03-01
A method is introduced to measure internal mechanical displacement and strain by means of MRI. Such measurements are needed to reconstruct an image of the elastic Young's modulus. A stimulated echo acquisition sequence with additional gradient pulses encodes internal displacements in response to an externally applied differential deformation. The sequence provides an accurate measure of static displacement by limiting the mechanical transitions to the mixing period of the stimulated echo. Elasticity reconstruction involves definition of a region of interest having uniform Young's modulus along its boundary and subsequent solution of the discretized elasticity equilibrium equations. Data acquisition and reconstruction were performed on a urethane rubber phantom of known elastic properties and an ex vivo canine kidney phantom using <2% differential deformation. Regional elastic properties are well represented on Young's modulus images. The long-term objective of this work is to provide a means for remote palpation and elasticity quantitation in deep tissues otherwise inaccessible to manual palpation. PMID:9498605
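The displacement-to-modulus chain can be illustrated in one dimension: differentiate the measured displacement to get strain, then map relative Young's modulus as stress/strain under an assumed uniform uniaxial stress. This is a drastically simplified stand-in for the paper's full equilibrium-equation reconstruction; geometry and modulus values are invented.

```python
import numpy as np

# 1D toy: a stiff inclusion deforms less, so strain dips where the
# Young's modulus E is high; E_rel ~ stress / strain recovers it.
n = 200
x = np.linspace(0.0, 1.0, n)
E_true = np.where((x > 0.4) & (x < 0.6), 5.0, 1.0)   # stiff inclusion

stress = 1.0                                         # uniform applied stress
strain_true = stress / E_true
# displacement = running integral of strain (trapezoid rule)
du = 0.5 * (strain_true[1:] + strain_true[:-1]) * np.diff(x)
u = np.concatenate(([0.0], np.cumsum(du)))

strain_est = np.gradient(u, x)                       # differentiate back
E_rel = stress / np.maximum(strain_est, 1e-9)        # relative modulus map
```

Away from the inclusion boundaries the finite-difference round trip is exact, so the modulus map reads 5.0 inside the inclusion and 1.0 outside.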
Efficient MR image reconstruction for compressed MR imaging.
Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris
2011-10-01
In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to a least square data fitting, total variation (TV) and L1 norm regularization. This has been shown to be very powerful for the MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of solutions from two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:21742542
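The split-then-average idea (solve the TV and L1 subproblems separately, then average the solutions) can be shown on a 1D denoising toy. The full algorithm applies this inside an accelerated iterative framework with a k-space data-fitting term and 2D TV; here each subproblem is a proximal operator, with the TV prox computed by projected gradient on a Chambolle-style dual, and all sizes and penalty weights invented.

```python
import numpy as np

def dt(p):
    """D^T p for the 1D forward-difference operator D."""
    return np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))

def prox_tv(y, lam, n_iter=400):
    """prox of lam*TV(x) in 1D, via projected gradient on the dual problem."""
    p = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        x = y - lam * dt(p)
        # dual gradient step (step size 1/(4*lam) since ||D D^T|| <= 4),
        # then projection onto the box |p_i| <= 1
        p = np.clip(p + np.diff(x) / (4.0 * lam), -1.0, 1.0)
    return y - lam * dt(p)

def prox_l1(y, lam):
    """Soft-thresholding: prox of lam*||x||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

t = np.linspace(0, 1, 128)
clean = np.where(t < 0.5, 0.0, 1.0)            # piecewise-constant "image row"
noisy = clean + 0.1 * np.random.default_rng(4).standard_normal(128)

x_tv = prox_tv(noisy, lam=0.3)                 # TV subproblem
x_l1 = prox_l1(noisy, lam=0.02)                # L1 subproblem
x = 0.5 * (x_tv + x_l1)                        # averaged solution
```

The TV subproblem alone already removes most of the noise while keeping the edge, which is why the composite scheme can afford to solve each regularizer separately and combine cheaply.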
Efficient MR image reconstruction for compressed MR imaging.
Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris
2010-01-01
In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least-squares data fitting, total variation (TV) and L1 norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into separate L1 and TV norm regularization subproblems. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:20879224
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
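The kernelized EM update described above has a compact generic form: with the image modelled as x = K·alpha, the usual ML-EM multiplicative update is applied to the kernel coefficients through the combined matrix P·K. The sketch below assumes that form; the matrices in the test are small random stand-ins, not a real PET system model.

```python
import numpy as np

def kernel_em(P, K, y, iters=500):
    # P: (m, n) projection matrix, K: (n, k) kernel matrix built from
    # prior-information features, y: (m,) measured projection counts.
    # Standard ML-EM applied to alpha through the combined matrix A = P K.
    A = P @ K
    sens = np.maximum(A.sum(axis=0), 1e-12)   # sensitivity, A^T 1
    alpha = np.ones(K.shape[1])
    for _ in range(iters):
        ybar = np.maximum(A @ alpha, 1e-12)   # expected projections
        alpha *= (A.T @ (y / ybar)) / sens    # multiplicative EM update
    return K @ alpha                          # reconstructed image
```

Because the update is multiplicative, nonnegativity of the coefficients is preserved automatically, which is one reason the kernel method is easy to implement compared with explicit regularization.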
PET image reconstruction using kernel method.
Wang, Guobao; Qi, Jinyi
2015-01-01
Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results. PMID:25095249
Image reconstruction algorithms with wavelet filtering for optoacoustic imaging
NASA Astrophysics Data System (ADS)
Gawali, S.; Leggio, L.; Broadway, C.; González, P.; Sánchez, M.; Rodríguez, S.; Lamela, H.
2016-03-01
Optoacoustic imaging (OAI) is a hybrid biomedical imaging modality based on the generation and detection of ultrasound produced by illuminating the target tissue with laser light. Typically, laser light in the visible or near-infrared spectrum is used as the excitation source. OAI relies on image reconstruction algorithms that recover the spatial distribution of optical absorption in tissue. In this work, we apply a time-domain back-projection (BP) reconstruction algorithm and wavelet filtering for point and line detection, respectively. A comparative study between point detection and integrated line detection has been carried out by evaluating their effects on the reconstructed image. Our results demonstrate that the proposed back-projection algorithm, combined with wavelet filtering, efficiently reconstructs high-resolution images of absorbing spheres embedded in a non-absorbing medium.
Coronary x-ray angiographic reconstruction and image orientation
Sprague, Kevin; Drangova, Maria; Lehmann, Glen
2006-03-15
We have developed an interactive geometric method for 3D reconstruction of the coronary arteries using multiple single-plane angiographic views with arbitrary orientations. Epipolar planes and epipolar lines are employed to trace corresponding vessel segments on these views. These points are utilized to reconstruct 3D vessel centerlines. The accuracy of the reconstruction is assessed using: (1) near-intersection distances of the rays that connect x-ray sources with projected points, (2) distances between traced and projected centerlines. These same two measures enter into a fitness function for a genetic search algorithm (GA) employed to orient the angiographic image planes automatically in 3D avoiding local minima in the search for optimized parameters. Furthermore, the GA utilizes traced vessel shapes (as opposed to isolated anchor points) to assist the optimization process. Differences between two-view and multiview reconstructions are evaluated. Vessel radii are measured and used to render the coronary tree in 3D as a surface. Reconstruction fidelity is demonstrated via (1) virtual phantom, (2) real phantom, and (3) patient data sets, the latter two of which utilize the GA. These simulated and measured angiograms illustrate that the vessel centerlines are reconstructed in 3D with accuracy below 1 mm. The reconstruction method is thus accurate compared to typical vessel dimensions of 1-3 mm. The methods presented should enable a combined interpretation of the severity of coronary artery stenoses and the hemodynamic impact on myocardial perfusion in patients with coronary artery disease.
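The "near-intersection" of rays used above to place a 3D vessel point has a small closed-form least-squares solution: the point minimizing the summed squared distance to the rays joining each x-ray source to its projected image point. A sketch with generic geometry (not the paper's code):

```python
import numpy as np

def nearest_point_to_rays(origins, dirs):
    # Least-squares 3D point minimizing total squared distance to a set
    # of rays, each given by an origin o_i and a direction d_i.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)         # solvable when rays aren't parallel
```

For exactly intersecting rays this recovers the intersection; for skew rays it returns the point of closest approach, whose residual distance is the near-intersection measure used in the fitness function.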
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.
2009-01-01
Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
Regularized reconstruction of wave fields from refracted images of water
NASA Astrophysics Data System (ADS)
Choudhury, K. Roy; O'Sullivan, F.; Samanta, M.; Shrira, V.; Caulliez, G.
2009-04-01
Refractive imaging of wave fields is often used for observation of short gravity and gravity-capillary waves in wave tanks and in the field. A light box placed under the waves emits light of spatially graduated intensity. The refracted light intensity image recorded overhead can be related to the wave slope field using a system of equations derived from the laws of refraction. Previous authors have proposed a two-stage reconstruction strategy for the recovery of wave slope and height fields: (i) estimation of local slope fields, (ii) global reconstruction of height and slope fields using the local estimates. Our statistical analysis of local slope estimates reveals that estimation error variability increases considerably from the bright to the dark ends of the imaging area, with some concomitant bias. The reconstruction problem behaves like an ill-posed inverse problem in the dark areas of the image. Ill-posedness is addressed by a reconstruction method that imposes Tikhonov regularization of directional wave slopes using penalized least squares. Other refinements proposed include (a) bias correction of local slope estimates, (b) spatially weighted reconstruction using the estimated variability of local slope estimates, and (c) more accurate estimates of reference light profiles from time-sequence data. A computationally efficient algorithm that exploits sparsity in the resulting system of equations is employed to evaluate the regularized estimator. Simulation studies show that the refinements can result in substantial improvements in the mean squared error of reconstruction. The algorithm is applied to obtain wave field reconstructions from video recordings. Analysis of various video sequences demonstrates distinct spatial patterns at different wind speed and fetch combinations.
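The weighted, Tikhonov-regularized reconstruction of heights from slope estimates can be illustrated in a 1-D analogue. The discretization below (forward differences for the slope operator, a second-difference smoothness penalty, and the mean height pinned to zero since slopes determine height only up to a constant) is an assumption for illustration, not the paper's 2-D formulation.

```python
import numpy as np

def reconstruct_height(slopes, weights, lam):
    # Penalized least squares: min ||W^(1/2)(D h - s)||^2 + lam ||D2 h||^2
    # slopes: (n-1,) local slope estimates, weights: (n-1,) inverse variances
    n = len(slopes) + 1
    D = np.zeros((n - 1, n))                 # forward-difference operator
    D[np.arange(n - 1), np.arange(n - 1)] = -1.0
    D[np.arange(n - 1), np.arange(1, n)] = 1.0
    D2 = np.zeros((n - 2, n))                # second-difference operator
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = D.T @ (weights[:, None] * D) + lam * (D2.T @ D2)
    b = D.T @ (weights * slopes)
    # fix the gauge: slopes leave the mean height free, so pin it to zero
    A = A + np.ones((n, n)) / n**2
    return np.linalg.solve(A, b)
```

In the paper the analogous 2-D system is sparse and is solved with a sparsity-exploiting solver rather than a dense one; the dense solve here just keeps the sketch self-contained.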
Joint model of motion and anatomy for PET image reconstruction
Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama
2007-12-15
Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem.
Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk
2015-03-15
Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.
Accelerated Compressed Sensing Based CT Image Reconstruction
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200
Medical Imaging Inspired Vertex Reconstruction at LHC
NASA Astrophysics Data System (ADS)
Hageböck, S.; von Toerne, E.
2012-12-01
Three-dimensional image reconstruction in medical applications (PET or X-ray CT) applies sophisticated filter algorithms to the linear trajectories of coincident photon pairs or x-rays. The goal is to reconstruct an image of an emitter density distribution. In a similar manner, tracks in particle physics originate from vertices that need to be distinguished from background track combinations. In this study it is investigated if vertex reconstruction in high energy proton collisions may benefit from medical imaging methods. A new method of vertex finding, the Medical Imaging Vertexer (MIV), is presented based on a three-dimensional filtered backprojection algorithm. It is compared to the open-source RAVE vertexing package. The performance of the vertex finding algorithms is evaluated as a function of instantaneous luminosity using simulated LHC collisions. Tracks in these collisions are described by a simplified detector model which is inspired by the tracking performance of the LHC experiments. At high luminosities (25 pileup vertices and more), the medical imaging approach finds vertices with a higher efficiency and purity than the RAVE "Adaptive Vertex Reconstructor" algorithm. It is also much faster if more than 25 vertices are to be reconstructed, because its CPU time rises linearly with the number of tracks whereas it rises quadratically for the adaptive vertex fitter AVR.
Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.
2013-01-01
Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778
Image reconstruction for PET/CT scanners: past achievements and future challenges
Tong, Shan; Alessio, Adam M; Kinahan, Paul E
2011-01-01
PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831
Image reconstructions with the rotating RF coil.
Trakic, A; Wang, H; Weber, E; Li, B K; Poole, M; Liu, F; Crozier, S
2009-12-01
Recent studies have shown that rotating a single RF transceive coil (RRFC) provides a uniform coverage of the object and brings a number of hardware advantages (i.e. requires only one RF channel, averts coil-coil coupling interactions and facilitates large-scale multi-nuclear imaging). Motion of the RF coil sensitivity profile however violates the standard Fourier Transform definition of a time-invariant signal, and the images reconstructed in this conventional manner can be degraded by ghosting artifacts. To overcome this problem, this paper presents Time Division Multiplexed-Sensitivity Encoding (TDM-SENSE), as a new image reconstruction scheme that exploits the rotation of the RF coil sensitivity profile to facilitate ghost-free image reconstructions and reductions in image acquisition time. A transceive RRFC system for head imaging at 2 Tesla was constructed and applied in a number of in vivo experiments. In this initial study, alias-free head images were obtained in half the usual scan time. It is hoped that new sequences and methods will be developed by taking advantage of coil motion. PMID:19800824
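For contrast with TDM-SENSE, the underlying sensitivity-encoding inversion can be sketched in its textbook Cartesian form: with R-fold undersampling, each aliased pixel couples R true pixels through the coil sensitivities, giving a small least-squares system per pixel. This is generic multi-coil SENSE, not the rotating-coil reconstruction itself, and the sensitivities below are random stand-ins.

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    # aliased: (C, n/R) coil images folded by R-fold undersampling
    # sens:    (C, n)  coil sensitivity maps over the full FOV
    C, m = aliased.shape
    n = sens.shape[1]
    assert n == m * R
    x = np.zeros(n, dtype=complex)
    for p in range(m):
        cols = [p + r * m for r in range(R)]   # the R pixels folded onto p
        E = sens[:, cols]                      # C x R encoding matrix
        x[cols] = np.linalg.lstsq(E, aliased[:, p], rcond=None)[0]
    return x
```

In TDM-SENSE the "coils" are effectively the time-multiplexed positions of the single rotating coil, so the same kind of linear system is solved with time-varying sensitivities.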
Image reconstructions with the rotating RF coil
NASA Astrophysics Data System (ADS)
Trakic, A.; Wang, H.; Weber, E.; Li, B. K.; Poole, M.; Liu, F.; Crozier, S.
2009-12-01
Recent studies have shown that rotating a single RF transceive coil (RRFC) provides a uniform coverage of the object and brings a number of hardware advantages (i.e. requires only one RF channel, averts coil-coil coupling interactions and facilitates large-scale multi-nuclear imaging). Motion of the RF coil sensitivity profile however violates the standard Fourier Transform definition of a time-invariant signal, and the images reconstructed in this conventional manner can be degraded by ghosting artifacts. To overcome this problem, this paper presents Time Division Multiplexed — Sensitivity Encoding (TDM-SENSE), as a new image reconstruction scheme that exploits the rotation of the RF coil sensitivity profile to facilitate ghost-free image reconstructions and reductions in image acquisition time. A transceive RRFC system for head imaging at 2 Tesla was constructed and applied in a number of in vivo experiments. In this initial study, alias-free head images were obtained in half the usual scan time. It is hoped that new sequences and methods will be developed by taking advantage of coil motion.
Optimizing modelling in iterative image reconstruction for preclinical pinhole PET
NASA Astrophysics Data System (ADS)
Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.
2016-05-01
The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.
Region of interest motion compensation for PET image reconstruction.
Qiao, Feng; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2007-05-21
A motion-incorporated reconstruction (MIR) method for gated PET imaging has recently been developed by several authors to correct for respiratory motion artifacts in PET imaging. This method however relies on a motion map derived from images (4D PET or 4D CT) of the entire field of view (FOV). In this study we present a region of interest (ROI)-based extension to this method, whereby only the motion map of a user-defined ROI is required and motion incorporation during image reconstruction is solely performed within the ROI. A phantom study and an NCAT computer simulation study were performed to test the feasibility of this method. The phantom study showed that the ROI-based MIR produced results that are within 1.26% of those obtained by the full image-based MIR approach when using the same accurate motion information. The NCAT phantom study on the other hand, further verified that motion of features of interest in an image can be estimated more efficiently and potentially more accurately using the ROI-based approach. A reduction of motion estimation time from 450 s to 30 and 73 s was achieved for two different ROIs respectively. In addition, the ROI-based approach showed a reduction in registration error of 43% for one ROI, which effectively reduced quantification bias by 44% and 32% using mean and maximum voxel values, respectively. PMID:17473344
Optimizing modelling in iterative image reconstruction for preclinical pinhole PET.
Goorden, Marlies C; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J
2016-05-21
The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning (99m)Tc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes ('multiple-pinhole paths' (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging (18)F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport. PMID:27082049
NASA Astrophysics Data System (ADS)
Yuan, Xiaohui; Ozturk, Cengizhan; Chi-Fishman, Gloria
2007-03-01
This paper describes our work on tagline detection and tissue strain synthesis. The tagline detection method extends our previous work [16] using pseudo-wavelet reconstruction. The novelty in tagline detection is that we integrated an active contour model and successfully improved the detection and indexing performance. Using the pseudo-wavelet reconstruction-based method, prominent wavelet coefficients were retained while others were eliminated. Taglines were then extracted from the reconstructed images using thresholding. Due to noise and artifacts, a tagline can be broken into segments. We employed an active contour model that tracks the most likely segments and bridges them. Experiments demonstrated that our method extracts taglines automatically with greater robustness. Tissue strain was also reconstructed using the extracted taglines.
3D EIT image reconstruction with GREIT.
Grychtol, Bartłomiej; Müller, Beat; Adler, Andy
2016-06-01
Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
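GREIT-style reconstruction ultimately produces a single linear reconstruction matrix trained against simulated targets, so imaging at run time is just a matrix-vector product. A minimal ridge-regression sketch of that idea follows; the real algorithm adds noise weighting and shaping of the desired target images, which are omitted here.

```python
import numpy as np

def train_linear_reconstructor(Y, X, lam):
    # Y: (m, T) simulated measurements for T training targets
    # X: (p, T) corresponding desired images
    # returns R (p, m) minimizing sum_t ||R y_t - x_t||^2 + lam ||R||_F^2
    m = Y.shape[0]
    return X @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(m))
```

Extending GREIT to 3D, as in this paper, amounts to letting the training targets and desired images live on a volumetric grid while keeping the same trained-matrix formulation.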
Fast Image Reconstruction with L2-Regularization
Bilgic, Berkin; Chatnuntawech, Itthi; Fan, Audrey P.; Setsompop, Kawin; Cauley, Stephen F.; Wald, Lawrence L.; Adalsteinsson, Elfar
2014-01-01
Purpose We introduce L2-regularized reconstruction algorithms with closed-form solutions that achieve dramatic computational speed-up relative to state-of-the-art L1- and L2-based iterative algorithms while maintaining similar image quality for various applications in MRI reconstruction. Materials and Methods We compare fast L2-based methods to state-of-the-art algorithms employing iterative L1- and L2-regularization in numerical phantom and in vivo data in three applications: (1) Fast Quantitative Susceptibility Mapping (QSM), (2) Lipid artifact suppression in Magnetic Resonance Spectroscopic Imaging (MRSI), and (3) Diffusion Spectrum Imaging (DSI). In all cases, the proposed L2-based methods are compared with the state-of-the-art algorithms, and a two to three orders of magnitude speed-up is demonstrated with similar reconstruction quality. Results The closed-form solution developed for regularized QSM allows processing of a 3D volume under 5 seconds, the proposed lipid suppression algorithm takes under 1 second to reconstruct single-slice MRSI data, while the PCA-based DSI algorithm estimates diffusion propagators from undersampled q-space for a single slice under 30 seconds, all running in Matlab using a standard workstation. Conclusion For the applications considered herein, closed-form L2-regularization can be a faster alternative to its iterative counterpart or L1-based iterative algorithms, without compromising image quality. PMID:24395184
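The closed-form L2 (Tikhonov) solutions behind this speed-up are short to state: a direct normal-equations solve in the general case, and a per-frequency elementwise solve when the operator is a circular convolution and is therefore diagonalized by the DFT. Both functions below are generic stand-ins for the paper's application-specific operators.

```python
import numpy as np

def l2_closed_form(A, y, lam):
    # argmin_x ||A x - y||^2 + lam ||x||^2 : one linear solve, no iterations
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def l2_deconv_fft(h, y, lam):
    # same problem when A is circular convolution with kernel h;
    # the DFT diagonalizes A, so the solve is elementwise per frequency:
    # X(f) = conj(H(f)) Y(f) / (|H(f)|^2 + lam)
    H = np.fft.fft(h, n=len(y))
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))
```

The FFT form is where the "two to three orders of magnitude" advantage over iterative L1 solvers typically comes from: the cost is a couple of FFTs regardless of the regularization weight.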
Optimal Statistical Approach to Optoacoustic Image Reconstruction
NASA Astrophysics Data System (ADS)
Zhulina, Yulia V.
2000-11-01
An optimal statistical approach is applied to the task of image reconstruction in photoacoustics. The physical essence of the task is as follows: Pulse laser irradiation induces an ultrasound wave on the inhomogeneities inside the investigated volume. This acoustic wave is received by the set of receivers outside this volume. It is necessary to reconstruct a spatial image of these inhomogeneities. Developed mathematical techniques of the radio location theory are used for solving the task. An algorithm of maximum likelihood is synthesized for the image reconstruction. The obtained algorithm is investigated by digital modeling. The number of receivers and their disposition in space are arbitrary. Results of the synthesis are applied to noninvasive medical diagnostics (breast cancer). The capability of the algorithm is tested on real signals. The image is built using signals obtained in vitro. The essence of the algorithm includes (i) summing of all signals in the image plane with the transform from the time coordinates of signals to the spatial coordinates of the image and (ii) optimal spatial filtration of this sum. The results are shown in the figures.
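Step (i) of the algorithm as described, summing all signals with a time-to-space coordinate transform, is a delay-and-sum back-projection. A minimal sketch on a 1-D pixel grid follows (the optimal spatial filtration of step (ii) is omitted; geometry and sampling values are illustrative).

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid, c, fs):
    # For each image pixel, add every receiver's signal sampled at the
    # index given by the pixel-to-receiver time of flight (d / c).
    img = np.zeros(len(grid))
    for sig, pos in zip(signals, sensor_pos):
        d = np.linalg.norm(grid - pos, axis=1)   # pixel-sensor distances (m)
        idx = np.round(d / c * fs).astype(int)   # time-of-flight in samples
        ok = idx < len(sig)
        img[ok] += sig[idx[ok]]
    return img
```

A point absorber produces a delta-like pulse at each receiver at its time of flight, so the summed image peaks at the source position, which is easy to verify.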
Stochastic image reconstruction for a dual-particle imaging system
NASA Astrophysics Data System (ADS)
Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.
2016-02-01
Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a ²⁵²Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.
Sampling conditions for gradient-magnitude sparsity based image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-03-01
Image reconstruction from sparse-view data in 2D fan-beam CT is investigated by constrained total-variation minimization. This optimization problem exploits possible sparsity in the gradient-magnitude image (GMI). The investigation is performed in simulation under ideal, noiseless data conditions in order to reveal a possible link between GMI sparsity and the number of projection views necessary for reconstructing an accurate image. Results are shown for two quite different phantoms of similar GMI sparsity.
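The link between TV minimization and GMI sparsity can be made concrete: the isotropic TV seminorm is the l1 norm of the gradient-magnitude image, which is sparse for piecewise-constant objects. A small numpy illustration using forward differences (a generic discretization, not the authors' solver):

```python
import numpy as np

def gradient_magnitude(u):
    # Forward differences with replicated boundary, a common
    # discretization in total-variation formulations.
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return np.sqrt(gx**2 + gy**2)

def total_variation(u):
    # The isotropic TV seminorm is the l1 norm of the GMI.
    return gradient_magnitude(u).sum()

# A piecewise-constant phantom has a sparse GMI: gradients are
# nonzero only on the edges of the square.
u = np.zeros((64, 64))
u[16:48, 16:48] = 1.0
gmi = gradient_magnitude(u)
print(np.count_nonzero(gmi), u.size)  # 127 4096
```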
Simultaneous algebraic reconstruction technique based on guided image filtering.
Ji, Dongjiang; Qu, Gangrong; Liu, Baodong
2016-07-11
The challenge of computed tomography is to reconstruct high-quality images from few-view projections. Using a prior guidance image, guided image filtering smooths images while preserving edge features. The prior guidance image can be incorporated into the image reconstruction process to improve image quality. We propose a new simultaneous algebraic reconstruction technique based on guided image filtering. Specifically, the prior guidance image is updated in the image reconstruction process, merging information iteratively. To validate the algorithm's practicality and efficiency, experiments were performed with numerical phantom projection data and real projection data. The results demonstrate that the proposed method is effective and efficient for nondestructive testing and rock mechanics. PMID:27410859
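The SART backbone of such a method is a normalized Landweber-style update. A bare numpy sketch of one iteration (the guided-filtering step the paper inserts between iterations is omitted; this is a generic SART, not the authors' implementation):

```python
import numpy as np

def sart_step(x, A, b, relax=1.0):
    """One SART update for A x = b: the residual is normalized by
    the row sums (forward projection of an all-ones image) and the
    backprojection by the column sums."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    residual = (b - A @ x) / np.where(row_sums > 0, row_sums, 1.0)
    return x + relax * (A.T @ residual) / np.where(col_sums > 0, col_sums, 1.0)

# Tiny consistent system: SART iterates converge to the solution.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = np.zeros(3)
for _ in range(200):
    x = sart_step(x, A, b)
print(np.allclose(x, x_true, atol=1e-3))  # True
```

In the paper's scheme, a guided image filter would be applied to the current estimate (with the evolving prior image as guide) after each such sweep.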
Optimal Discretization Resolution in Algebraic Image Reconstruction
NASA Astrophysics Data System (ADS)
Sharif, Behzad; Kamalabadi, Farzad
2005-11-01
In this paper, we focus on data-limited tomographic imaging problems where the underlying linear inverse problem is ill-posed. A typical regularized reconstruction algorithm uses an algebraic formulation with a predetermined discretization resolution. If the selected resolution is too low, we may lose useful details of the underlying image; if it is too high, the reconstruction will be unstable and the representation will fit irrelevant features. In this work, two approaches are introduced to address this issue. The first approach uses Mallows' C_L method or generalized cross-validation. For each of the two methods, a joint estimator of regularization parameter and discretization resolution is proposed and their asymptotic optimality is investigated. The second approach is a Bayesian estimator of the model order using a complexity-penalizing prior. Numerical experiments focus on a space imaging application from a set of limited-angle tomographic observations.
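For the simplest (Tikhonov) case, generalized cross-validation amounts to scoring each candidate regularization parameter by a normalized residual and picking the minimizer. A generic numpy sketch (the paper's joint resolution/parameter estimator is more involved):

```python
import numpy as np

def gcv_score(A, b, lam):
    """GCV score for Tikhonov regularization:
    n * ||(I - H) b||^2 / trace(I - H)^2, where H is the influence
    (hat) matrix of the regularized solution."""
    n = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    resid = b - H @ b
    return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

# Select lambda on a log grid by minimizing the GCV score.
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
b = A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(50)
lams = [10.0 ** e for e in range(-6, 3)]
best = min(lams, key=lambda l: gcv_score(A, b, l))
print(best)
```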
Calibration of time-of-flight cameras for accurate intraoperative surface reconstruction
Mersmann, Sven; Seitel, Alexander; Maier-Hein, Lena; Erz, Michael; Jähne, Bernd; Nickel, Felix; Mieth, Markus; Mehrabi, Arianeb
2013-08-15
Purpose: In image-guided surgery (IGS) intraoperative image acquisition of tissue shape, motion, and morphology is one of the main challenges. Recently, time-of-flight (ToF) cameras have emerged as a new means for fast range image acquisition that can be used for multimodal registration of the patient anatomy during surgery. The major drawbacks of ToF cameras are systematic errors in the image acquisition technique that compromise the quality of the measured range images. In this paper, we propose a calibration concept that, for the first time, accounts for all known systematic errors affecting the quality of ToF range images. Laboratory and in vitro experiments assess its performance in the context of IGS. Methods: For calibration, the camera-related error sources depending on the sensor, the sensor temperature and the set integration time are corrected first, followed by the scene-specific errors, which are modeled as a function of the measured distance, the amplitude and the radial distance to the principal point of the camera. Accounting for the high accuracy demands in IGS, a custom-made calibration device is used to provide the reference distance data against which the cameras are calibrated. To evaluate the mitigation of the error, the remaining residual error after ToF depth calibration was compared with that arising from using the manufacturer routines for several state-of-the-art ToF cameras. The accuracy of reconstructed ToF surfaces was investigated after multimodal registration with computed tomography (CT) data of liver models by assessment of the target registration error (TRE) of markers introduced in the livers. Results: For the inspected distance range of up to 2 m, our calibration approach yielded a mean residual error to reference data ranging from 1.5 ± 4.3 mm for the best camera to 7.2 ± 11.0 mm. When compared to the data obtained from the manufacturer routines, the residual error was reduced by at least 78% from the worst calibration result to the most accurate
A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging
Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B.; Jia, Xun
2014-01-01
Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, a 0.204 mm average difference and a 0.484 mm maximum difference are found for the phantom case, and maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase
Building reconstruction from images and laser scanning
NASA Astrophysics Data System (ADS)
Brenner, Claus
2005-03-01
The automatic extraction of objects from laser scans and images has been a topic of research for decades. Nowadays, with new services expected, especially in the area of navigation systems, location based services, and augmented reality, the need for automated, efficient extraction systems becomes more urgent than ever. This paper reviews a number of automatic and semi-automatic reconstruction methods in more detail in order to reveal their underlying principles. It then discusses some general properties of reconstruction approaches which have evolved. This shows that, although research is still far from the goal of the initially envisioned fully automatic reconstruction systems, there is now a much better understanding of the problem and the ways it can be tackled.
Taylor, Adam J; Graham, Daniel J; Castner, David G
2015-09-01
To properly process and reconstruct 3D ToF-SIMS data from systems such as multi-component polymers, drug delivery scaffolds, cells and tissues, it is important to understand the sputtering behavior of the sample. Modern cluster sources enable efficient and stable sputtering of many organic materials. However, not all materials sputter at the same rate and few studies have explored how different sputter rates may distort reconstructed depth profiles of multicomponent materials. In this study, spun-cast bilayer polymer films of polystyrene and PMMA are used as model systems to optimize methods for the reconstruction of depth profiles in systems exhibiting different sputter rates between components. Transforming the bilayer depth profile from sputter time to depth using a single sputter rate fails to account for sputter rate variations during the profile. This leads to inaccurate apparent layer thicknesses and interfacial positions, as well as the appearance of continued sputtering into the substrate. Applying measured single component sputter rates to the bilayer films with a step change in sputter rate at the interfaces yields more accurate film thickness and interface positions. The transformation can be further improved by applying a linear sputter rate transition across the interface, thus modeling the sputter rate changes seen in polymer blends. This more closely reflects the expected sputtering behavior. This study highlights the need for both accurate evaluation of component sputter rates and the careful conversion of sputter time to depth, if accurate 3D reconstructions of complex multi-component organic and biological samples are to be achieved. The effects of errors in sputter rate determination are also explored. PMID:26185799
Passeri, A; Formiconi, A R; De Cristofaro, M T; Pupi, A; Meldolesi, U
1997-04-01
It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64x64) slices could be reconstructed from a set of 90 (64x64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. PMID:9096089
Propagation phasor approach for holographic image reconstruction
NASA Astrophysics Data System (ADS)
Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan
2016-03-01
To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in a five- to seven-fold reduction in the number of raw measurements, while still achieving a competitive resolution and space-bandwidth product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears.
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble-average statistics using high-resolution training imagery from which the lower-resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described, from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected by the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.
A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures.
Mangipudi, K R; Radisch, V; Holzer, L; Volkert, C A
2016-04-01
We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an uncertainty much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using the traditional FIB-tomography method employing constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. PMID:26906523
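The wedge idea, recovering each slice's thickness from the change in cross-section width, reduces to simple trigonometry. A sketch under an assumed symmetric-wedge geometry (the half-angle convention here is illustrative; the paper's exact geometry may differ):

```python
import math

def slice_thicknesses(widths, half_angle_deg):
    """Per-slice thickness from successive cross-section widths of a
    symmetric wedge: cutting dz deeper widens the section by
    dw = 2*dz*tan(alpha), so dz = dw / (2*tan(alpha))."""
    t = math.tan(math.radians(half_angle_deg))
    return [(w1 - w0) / (2.0 * t) for w0, w1 in zip(widths, widths[1:])]

# Widths (in nm) measured in three successive images of a hypothetical
# 30-degree half-angle wedge; the two recovered thicknesses differ,
# which a constant-slice-thickness assumption would miss.
thicknesses = slice_thicknesses([100.0, 111.5, 123.1], 30.0)
print(thicknesses)
```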
Performance-based assessment of reconstructed images
Hanson, Kenneth
2009-01-01
During the early 1990s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection and a new challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that the performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.
Hyperspectral image reconstruction for diffuse optical tomography
Larusson, Fridrik; Fantini, Sergio; Miller, Eric L.
2011-01-01
We explore the development and performance of algorithms for hyperspectral diffuse optical tomography (DOT) for which data from hundreds of wavelengths are collected and used to determine the concentration distribution of chromophores in the medium under investigation. An efficient method is detailed for forming the images using iterative algorithms applied to a linearized Born approximation model assuming the scattering coefficient is spatially constant and known. The L-surface framework is employed to select optimal regularization parameters for the inverse problem. We report image reconstructions using 126 wavelengths with estimation error in simulations as low as 0.05 and mean square error of experimental data of 0.18 and 0.29 for ink and dye concentrations, respectively, an improvement over reconstructions using fewer specifically chosen wavelengths. PMID:21483616
A two-step Hilbert transform method for 2D image reconstruction.
Noo, Frédéric; Clackdoyle, Rolf; Pack, Jed D
2004-09-01
The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained. PMID:15470913
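The second step, Hilbert filtering along lines of the DBP image, is a 1D operation per line. It can be sketched via the FFT as multiplication by -i*sign(frequency) (a generic periodic discrete Hilbert transform, not the paper's finite-interval inversion):

```python
import numpy as np

def hilbert_filter(signal):
    """Discrete (periodic) Hilbert transform of a 1D line via the
    FFT: multiply the spectrum by -i*sign(frequency)."""
    f = np.fft.fftfreq(len(signal))
    return np.real(np.fft.ifft(-1j * np.sign(f) * np.fft.fft(signal)))

# Sanity check: the Hilbert transform of cos is sin.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
out = hilbert_filter(np.cos(4 * t))
print(np.allclose(out, np.sin(4 * t), atol=1e-8))  # True
```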
Deep Reconstruction Models for Image Set Classification.
Hayat, Munawar; Bennamoun, Mohammed; An, Senjian
2015-04-01
Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single-image-based classification, it offers more promise and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods. PMID:26353289
NASA Astrophysics Data System (ADS)
Chen, Yujia; Wang, Kun; Gursoy, Doga; Soriano, Carmen; De Carlo, Francesco; Anastasio, Mark A.
2016-03-01
Propagation-based X-ray phase-contrast tomography (XPCT) provides the opportunity to image weakly absorbing objects and is being explored actively for a variety of important pre-clinical applications. Quantitative XPCT image reconstruction methods typically involve a phase retrieval step followed by application of an image reconstruction algorithm. Most approaches to phase retrieval require either acquiring multiple images at different object-to-detector distances or introducing simplifying assumptions, such as a single-material assumption, to linearize the imaging model. In order to overcome these limitations, a non-linear image reconstruction method has been proposed previously that jointly estimates the absorption and refractive properties of an object from XPCT projection data acquired at a single propagation distance, without the need to linearize the imaging model. However, the numerical properties of the associated non-convex optimization problem remain largely unexplored. In this study, computer simulations are conducted to investigate the feasibility of the joint reconstruction problem in practice. We demonstrate that the joint reconstruction problem is ill-posed and sensitive to system inconsistencies. Particularly, the method can generate accurate refractive index images only if the object is thin and has no phase-wrapping in the data. However, we also observed that, for weakly absorbing objects, the refractive index images reconstructed by the joint reconstruction method are, in general, more accurate than those reconstructed using methods that simply ignore the object's absorption.
Portable and accurate 3D scanner for breast implant design and reconstructive plastic surgery
NASA Astrophysics Data System (ADS)
Rigotti, Camilla; Borghese, Nunzio A.; Ferrari, Stefano; Baroni, Guido; Ferrigno, Giancarlo
1998-06-01
In order to evaluate the proper breast implant, the surgeon relies on a standard set of measurements manually taken on the subject. This approach does not allow an accurate reconstruction of the breast shape to be obtained, and asymmetries can easily arise after surgery. The purpose of this work is to present a method which can help the surgeon in the choice of the shape and dimensions of a prosthesis, allowing for a perfect symmetry between the prosthesis and the contralateral breast, and which can be used as 3D visual feedback in plastic surgery.
The gridding method for image reconstruction by Fourier transformation
Schomberg, H.; Timmer, J.
1995-09-01
This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function ŵ and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
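The three steps can be sketched in 1D with a Gaussian window; the deapodization weights w are obtained numerically as the inverse DFT of the gridded window, so no analytic constants are needed (a toy illustration, not the paper's n-dimensional implementation; density-compensation weights for nonuniform sampling are omitted):

```python
import numpy as np

def gridding_reconstruct(kappas, samples, n, sigma=1.0):
    """1D gridding: (1) spread Fourier samples onto a Cartesian grid
    by convolving with a Gaussian window w_hat, (2) inverse DFT,
    (3) divide by w (deapodization). kappas are sample positions in
    DFT-bin units."""
    k = np.arange(n)
    grid = np.zeros(n, dtype=complex)
    for kap, s in zip(kappas, samples):
        d = (k - kap + n / 2.0) % n - n / 2.0   # wrapped bin distance
        grid += s * np.exp(-d**2 / (2 * sigma**2))
    # Deapodization weights: inverse DFT of the (wrapped) window.
    d0 = (k + n / 2.0) % n - n / 2.0
    w = n * np.fft.ifft(np.exp(-d0**2 / (2 * sigma**2)))
    return np.real(np.fft.ifft(grid) / w)

# For fully sampled, on-grid data the three steps invert exactly,
# because gridding then reduces to a circular convolution that the
# deapodization divides back out.
rng = np.random.default_rng(1)
f = rng.standard_normal(32)
f_rec = gridding_reconstruct(np.arange(32), np.fft.fft(f), 32)
print(np.allclose(f_rec, f))  # True
```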
Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness
2015-01-01
Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073
Tomographic image reconstruction using systolic array algorithms
Azevedo, S.G.; DeGroot, A.J.; Schneberk, D.J.; Brase, J.M.; Martz, H.E.; Jain, A.K.; Current, K.W.; Hurst, P.J.
1988-12-22
Image reconstruction for Computed Tomography (CT) is a time consuming operation on current uniprocessor computers and even on array processors. This is particularly true for three-dimensional data sets or for limited-data reconstructions requiring iterative procedures. In these cases, the projection operation (Radon transform) and its inverse (filtered back-projection) are major computational tasks that are performed many times. Multiprocessor computers, especially in systolic array configurations, can provide dramatic improvements in reconstruction times at reasonable costs. An in-house systolic processor, called SPRINT, has been programmed to demonstrate these improved speeds while achieving near 100% efficiency of all processor elements. We report on these results in this paper. In addition, two proposed hardware implementations of a new architecture are shown to have even greater speedup possibilities. One, using standard DSP chips, has been simulated to give a factor of three improvement over SPRINT, while the other, using custom VLSI that is now in the early stages of design, could potentially perform 512² reconstructions at video rates (100 times further speedup). These processors are also interconnected in a systolic array configuration. Experimental and projected results, with future plans, are also reported in this paper. 11 refs., 5 figs., 1 tab.
Scattering robust 3D reconstruction via polarized transient imaging.
Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai
2016-09-01
Reconstructing the 3D structure of scenes in scattering media is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with a scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile deviates significantly from the true value. To handle this problem, we exploit the different polarization behaviors of the reflection and scattering components, introducing active polarization to separate the reflection component and estimate a scattering-robust depth. Experiments demonstrate that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944
A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.
Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst
2015-01-01
Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares with an l1-norm regularization term. The method is tested on the reconstruction of an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
A biological phantom for evaluation of CT image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.
2014-03-01
In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.
NASA Astrophysics Data System (ADS)
Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent
2013-11-01
The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.
Regularization design and control of change admission in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Stayman, J. Webster
2014-03-01
Nearly all reconstruction methods are controlled through various parameter selections. Traditionally, such parameters are used to specify a particular noise and resolution trade-off in the reconstructed image volumes. The introduction of reconstruction methods that incorporate prior image information has demonstrated dramatic improvements in dose utilization and image quality, but has complicated the selection of reconstruction parameters, including those associated with balancing information used from prior images with that of the measurement data. While a noise-resolution tradeoff still exists, other potentially detrimental effects are possible with poor prior image parameter values, including the possible introduction of false features and the failure to incorporate sufficient prior information to gain any improvements. Traditional parameter selection methods such as heuristics based on similar imaging scenarios are subject to error and suboptimal solutions, while exhaustive searches can involve a large number of time-consuming iterative reconstructions. We propose a novel approach that prospectively determines the optimal prior image regularization strength to accurately admit specific anatomical changes without performing full iterative reconstructions. This approach leverages analytical approximations to the implicitly defined prior-image-based reconstruction solution and predictive metrics used to estimate imaging performance. The proposed method is investigated in phantom experiments and the shift-variance and data-dependence of optimal prior strength is explored. Optimal regularization based on the predictive approach is shown to agree well with traditional exhaustive reconstruction searches, while yielding substantial reductions in computation time. This suggests great potential of the proposed methodology in allowing for prospective patient-, data-, and change-specific customization of prior-image penalty strength to ensure accurate reconstruction of specific anatomical changes.
Shih, Cheng-Ting; Chang, Yuan-Jen; Hsu, Jui-Ting; Chuang, Keh-Shih; Chang, Shu-Jun; Wu, Jay
2015-12-01
Optical computed tomography (optical CT) has been proven to be a useful tool for dose readouts of polymer gel dosimeters. In this study, the algebraic reconstruction technique (ART) was used for image reconstruction of gel dosimeters to improve the image quality of optical CT. Cylindrical phantoms filled with N-isopropyl-acrylamide polymer gels were irradiated using a medical linear accelerator. A circular dose distribution and a hexagonal dose distribution were produced by applying the VMAT technique and six-field dose delivery, respectively. The phantoms were scanned using optical CT, and the images were reconstructed using the filtered back-projection (FBP) algorithm and the ART. For the circular dose distribution, the ART successfully reduced the ring artifacts and noise in the reconstructed image. For the hexagonal dose distribution, the ART reduced the hot spots at the entrances of the beams and increased the dose uniformity in the central region. Within the 50% isodose line, the gamma pass rates for the 2 mm/3% criteria for the ART and FBP were 99.2% and 88.1%, respectively. The ART could be used for the reconstruction of optical CT images to improve image quality and provide accurate dose conversion for polymer gel dosimeters. PMID:26165178
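ART is a row-action (Kaczmarz-type) scheme: each ray equation updates the image in turn, optionally clipping negative values. A minimal sketch, with illustrative names and a positivity constraint of the kind often imposed in dosimetry (this is the generic algorithm, not the paper's specific implementation):

```python
import numpy as np

def art(A, b, n_sweeps=100, relax=1.0, nonneg=True):
    """Algebraic reconstruction technique: cyclic Kaczmarz sweeps over
    the ray equations A[i] @ x = b[i], with an optional positivity clip."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)        # ||A[i]||^2 per row
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            # project the current estimate onto the i-th ray's hyperplane
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            if nonneg:
                np.maximum(x, 0.0, out=x)          # dose cannot be negative
    return x
```

For a consistent, overdetermined system with a nonnegative solution, the sweeps converge to that solution; the relaxation factor trades convergence speed against noise amplification.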
NASA Astrophysics Data System (ADS)
Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.
2009-12-01
A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently as compared to the other interface reconstruction methods available in the literature. Quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat does the rate of convergence become linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
Nonlinear dual reconstruction of SPECT activity and attenuation images.
Liu, Huafeng; Guo, Min; Hu, Zhenghui; Shi, Pengcheng; Hu, Hongjie
2014-01-01
In single photon emission computed tomography (SPECT), accurate attenuation maps are needed to perform essential attenuation compensation for high quality radioactivity estimation. Formulating the SPECT activity and attenuation reconstruction tasks as coupled signal estimation and system parameter identification problems, where the activity distribution and the attenuation parameter are treated as random variables with known prior statistics, we present a nonlinear dual reconstruction scheme based on the unscented Kalman filtering (UKF) principles. In this effort, the dynamic changes of the organ radioactivity distribution are described through state space evolution equations, while the photon-counting SPECT projection data are measured through the observation equations. Activity distribution is then estimated with sub-optimal fixed attenuation parameters, followed by attenuation map reconstruction given these activity estimates. Such coupled estimation processes are iteratively repeated as necessary until convergence. The results obtained from Monte Carlo simulated data, physical phantom, and real SPECT scans demonstrate the improved performance of the proposed method both from visual inspection of the images and a quantitative evaluation, compared to the widely used EM-ML algorithms. The dual estimation framework has the potential to be useful for estimating the attenuation map from emission data only and thus benefit the radioactivity reconstruction. PMID:25225796
Bayesian Image Reconstruction in Quantitative Photoacoustic Tomography.
Tarvainen, Tanja; Pulkkinen, Aki; Cox, Ben; Kaipio, Jari; Arridge, Simon
2013-08-30
Quantitative photoacoustic tomography is an emerging imaging technique aimed at estimating chromophore concentrations inside tissues from photoacoustic images, which are formed by combining optical information and ultrasonic propagation. This is a hybrid imaging problem in which the solution of one inverse problem acts as the data for another ill-posed inverse problem. In the optical reconstruction of quantitative photoacoustic tomography, the data is obtained as a solution of an acoustic inverse initial value problem. Thus, both the data and the noise are affected by the method applied to solve the acoustic inverse problem. In this paper, the noise of optical data is modelled as Gaussian distributed with mean and covariance approximated by solving several acoustic inverse initial value problems using acoustic noise samples as data. Furthermore, Bayesian approximation error modelling is applied to compensate for the modelling errors in the optical data caused by the acoustic solver. The results show that modelling of the noise statistics and the approximation errors can improve the optical reconstructions. PMID:24001987
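The approximation-error step described above amounts to estimating the mean and covariance of the mismatch between an accurate and an approximate forward model over prior draws, and folding those statistics into the Gaussian noise model. A minimal sketch with toy linear models (the function name and toy setup are mine, not the paper's solver):

```python
import numpy as np

def approximation_error_stats(f_accurate, f_approx, prior_samples):
    """Sample mean and covariance of the modelling error
    e(x) = f_accurate(x) - f_approx(x) over draws from the prior.
    In Bayesian approximation error modelling, the data offset is then
    shifted by this mean and the noise covariance augmented by this
    covariance before solving the inverse problem."""
    errs = np.array([f_accurate(x) - f_approx(x) for x in prior_samples])
    return errs.mean(axis=0), np.cov(errs, rowvar=False)
```

With identical accurate and approximate models the statistics vanish, recovering the standard noise model; any systematic mismatch shows up as a nonzero mean and an inflated covariance.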
NASA Astrophysics Data System (ADS)
Tscharf, A.; Rumpler, M.; Fraundorfer, F.; Mayer, G.; Bischof, H.
2015-08-01
During the last decades photogrammetric computer vision systems have been well established in scientific and commercial applications. In particular, the increasing affordability of unmanned aerial vehicles (UAVs) in conjunction with automated multi-view processing pipelines has resulted in an easy way of acquiring spatial data and creating realistic and accurate 3D models. With multicopter UAVs, it is possible to record highly overlapping images from almost terrestrial camera positions to oblique and nadir aerial images, owing to the ability to navigate slowly, hover and capture images at nearly any possible position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to enable easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly in the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three different case studies for applications in mining and archaeology and present several accuracy-related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well as inside views of an object by joint image processing to
Wavelet-based stereo images reconstruction using depth images
NASA Astrophysics Data System (ADS)
Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried
2007-09-01
It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images usually termed depth images or depth maps, which contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable more sensible reconstruction of the virtual view. Motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate
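The pixel-projection step of DIBR described above reduces, for a horizontally shifted virtual camera, to shifting each pixel by its disparity d = f*b/Z. A deliberately simplified sketch (names and the scan-order occlusion handling are my own; a real renderer orders pixels by depth and fills disocclusion holes):

```python
import numpy as np

def dibr_warp(image, depth, f, baseline):
    """Render a horizontally shifted virtual view from one image plus its
    depth map: each pixel moves along its row by disparity f*baseline/Z."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disp = np.rint(f * baseline / depth).astype(int)   # per-pixel disparity
    for y in range(h):
        for x in range(w):
            xv = x - disp[y, x]
            if 0 <= xv < w:
                out[y, xv] = image[y, x]  # naive: later pixels overwrite
    return out
```

Unfilled zeros in the output are the disocclusions that the wavelet-domain scheme in the paper is designed to reconstruct more sensibly.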
Pulsed holography for combustion diagnostics. [image reconstruction
NASA Technical Reports Server (NTRS)
Klein, N.; Dewilde, M. A.
1980-01-01
Image reconstruction and data extraction techniques were considered with respect to their application to combustion diagnostics. A system was designed and constructed that possesses sufficient stability and resolution to make quantitative data extraction possible. Example data were manually processed using the system to demonstrate its feasibility for the purpose intended. The system was interfaced with the PDP-11-04 computer for maximum design capability. It was concluded that the use of specialized digital hardware controlled by a relatively small computer provides the best combination of accuracy, speed, and versatility for this particular problem area.
Sidky, Emil Y; Anastasio, Mark A; Pan, Xiaochuan
2010-05-10
Propagation-based X-ray phase-contrast tomography (PCT) seeks to reconstruct information regarding the complex-valued refractive index distribution of an object. In many applications, a boundary-enhanced image is sought that reveals the locations of discontinuities in the real-valued component of the refractive index distribution. We investigate two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT that exploit the fact that a boundary-enhanced PCT image, or its gradient, is often sparse. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the l1-norm or TV-norm of the image, subject to data consistency constraints. We demonstrate that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data. PMID:20588896
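The TV-norm minimized in such few-view schemes is just the summed magnitude of the discrete image gradient. A minimal sketch of the isotropic variant (the function name and cropping convention are illustrative):

```python
import numpy as np

def tv_norm(img):
    """Isotropic total variation of a 2D image: the sparsity surrogate
    minimized, subject to data consistency, in few-view reconstruction."""
    dx = np.diff(img, axis=0)                 # vertical finite differences
    dy = np.diff(img, axis=1)                 # horizontal finite differences
    # crop so both difference arrays align on the same (n-1, m-1) grid
    return np.sum(np.sqrt(dx[:, :-1] ** 2 + dy[:-1, :] ** 2))
```

Piecewise-constant images (like boundary-enhanced maps) have very low TV, which is why this penalty favors them among all images consistent with the few measured views.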
Regularized image reconstruction for continuously self-imaging gratings.
Horisaki, Ryoichi; Piponnier, Martin; Druart, Guillaume; Guérineau, Nicolas; Primot, Jérôme; Goudail, François; Taboury, Jean; Tanida, Jun
2013-06-01
In this paper, we demonstrate two image reconstruction schemes for continuously self-imaging gratings (CSIGs). CSIGs are diffractive optical elements that generate a depth-invariant propagation pattern and sample objects with a sparse spatial frequency spectrum. To compensate for the sparse sampling, we apply two methods with different regularizations for CSIG imaging. The first method employs continuity of the spatial frequency spectrum, and the second one uses sparsity of the intensity pattern. The two methods are demonstrated with simulations and experiments. PMID:23736336
Three-dimensional reconstruction of laser-imploded targets from simulated pinhole images.
Xu, Peng; Bai, Yonglin; Bai, Xiaohong; Liu, Baiyu; Ouyang, Xian; Wang, Bo; Yang, Wenzheng; Gou, Yongsheng; Zhu, Bingli; Qin, Junjun
2012-11-10
This paper proposes an integral method to achieve a more accurate weighting matrix that makes very positive contributions to image reconstruction in inertial confinement fusion research. Standard algebraic reconstruction techniques with a positivity constraint included are utilized. The final normalized mean-square error between the simulated and reconstructed projection images is 0.000365%, a nearly perfect result that underscores the importance of an accurate weighting matrix. Compared with the error between the simulated and reconstructed phantoms, which is 2.35%, it appears that improving the accuracy of the projection image does not necessarily improve the reconstructed phantom. The proposed method can reconstruct a simulated laser-imploded target consisting of 100×100×100 voxels. PMID:23142895
Three-dimensional reconstruction of light microscopy image sections: present and future.
Wang, Yuzhen; Xu, Rui; Luo, Gaoxing; Wu, Jun
2015-03-01
Three-dimensional (3D) image reconstruction technologies can reveal previously hidden microstructures in human tissue. However, the lack of ideal, non-destructive cross-sectional imaging techniques is still a problem. Despite some drawbacks, histological sectioning remains one of the most powerful methods for accurate high-resolution representation of tissue structures. Computer technologies can produce 3D representations of interesting human tissue and organs that have been serial-sectioned, dyed or stained, imaged, and segmented for 3D visualization. 3D reconstruction also has great potential in the fields of tissue engineering and 3D printing. This article outlines the most common methods for 3D tissue section reconstruction. We describe the most important academic concepts in this field, and provide critical explanations and comparisons. We also note key steps in the reconstruction procedures, and highlight recent progress in the development of new reconstruction methods. PMID:24952302
Prior image constrained image reconstruction in emerging computed tomography applications
NASA Astrophysics Data System (ADS)
Brunner, Stephen T.
Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
SU-E-I-73: Clinical Evaluation of CT Image Reconstructed Using Interior Tomography
Zhang, J; Ge, G; Winkler, M; Cong, W; Wang, G
2014-06-01
Purpose: Radiation dose reduction has been a long-standing challenge in CT imaging of obese patients. Recent advances in interior tomography (reconstruction of an interior region of interest (ROI) from line integrals associated with only paths through the ROI) promise to achieve significant radiation dose reduction without compromising image quality. This study investigates the application of this technique to CT imaging by evaluating the quality of images reconstructed from patient data. Methods: Projection data were obtained directly from patients who had CT examinations on a dual source CT scanner (DSCT). The two detectors in the DSCT acquired projection data simultaneously. One detector provided projection data for a full field of view (FOV, 50 cm) while the other detector provided truncated projection data for a FOV of 26 cm. Full-FOV CT images were reconstructed using both filtered back-projection and an iterative algorithm, while the interior tomography algorithm was implemented to reconstruct ROI images. For comparison, FBP was also used to reconstruct ROI images. Reconstructed CT images were evaluated by radiologists and compared with images from the CT scanner. Results: The reconstructed ROI image was in excellent agreement with the truth inside the ROI, obtained from the CT scanner images, and the detailed features in the ROI were quantitatively accurate. Radiologists' evaluation showed that CT images reconstructed with interior tomography met diagnostic requirements. Radiation dose may be reduced by up to 50% using interior tomography, depending on patient size. Conclusion: This study shows that interior tomography can be readily employed in CT imaging for radiation dose reduction. It may be especially useful in imaging obese patients, whose subcutaneous tissue is less clinically relevant but may significantly increase radiation dose.
Optimized Quasi-Interpolators for Image Reconstruction.
Sacht, Leonardo; Nehab, Diego
2015-12-01
We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost. PMID:26390452
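The generalized sampling framework the paper works in pairs a compactly supported kernel with a digital correction filter. Its classic special case, cubic B-spline interpolation with an exact prefilter, can be sketched as follows (the paper's quasi-interpolators jointly optimize both pieces; this sketch instead solves the small banded prefilter system directly, and the function names are mine):

```python
import numpy as np

def bspline3(x):
    """Centered cubic B-spline kernel (support [-2, 2])."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x < 1
    m2 = (x >= 1) & (x < 2)
    out[m1] = 2/3 - x[m1]**2 + 0.5 * x[m1]**3
    out[m2] = (2 - x[m2])**3 / 6
    return out

def prefilter(samples):
    """Digital correction filter: solve B c = f (B is banded with 2/3 on
    the diagonal and 1/6 off it) so the spline reproduces the samples."""
    n = len(samples)
    k = np.arange(n)
    B = bspline3(k[:, None] - k[None, :])
    return np.linalg.solve(B, samples)

def reconstruct(c, x):
    """Continuous reconstruction: sum_k c_k * bspline3(x - k)."""
    k = np.arange(len(c))
    return bspline3(x[:, None] - k[None, :]) @ c
```

Without the prefilter the kernel alone blurs the samples; the digital filter restores them exactly, which is the division of labor the optimized quasi-interpolators generalize.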
Image reconstruction with uncertainty quantification in photoacoustic tomography.
Tick, Jenni; Pulkkinen, Aki; Tarvainen, Tanja
2016-04-01
Photoacoustic tomography is a hybrid imaging method that combines optical contrast and ultrasound resolution. The goal of photoacoustic tomography is to resolve an initial pressure distribution from detected ultrasound waves generated within an object due to an illumination of a short light pulse. In this work, a Bayesian approach to photoacoustic tomography is described. The solution of the inverse problem is derived and computation of the point estimates for image reconstruction and uncertainty quantification is described. The approach is investigated with simulations in different detector geometries, including limited view setup, and with different detector properties such as ideal point-like detectors, finite size detectors, and detectors with a finite bandwidth. The results show that the Bayesian approach can be used to provide accurate estimates of the initial pressure distribution, as well as information about the uncertainty of the estimates. PMID:27106341
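For a linear forward model with Gaussian noise and prior, the point estimate and the uncertainty quantification described above are both available in closed form. A minimal sketch (a generic linear-Gaussian posterior, with illustrative names, not the paper's acoustic model):

```python
import numpy as np

def gaussian_posterior(A, y, noise_cov, prior_mean, prior_cov):
    """Posterior mean and covariance for y = A x + e, e ~ N(0, noise_cov),
    x ~ N(prior_mean, prior_cov): the reconstruction plus its uncertainty."""
    Sn = np.linalg.inv(noise_cov)
    Sp = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(A.T @ Sn @ A + Sp)           # uncertainty
    post_mean = post_cov @ (A.T @ Sn @ y + Sp @ prior_mean)  # point estimate
    return post_mean, post_cov
```

The diagonal of the posterior covariance gives per-pixel error bars; poorly observed regions (e.g. in a limited-view geometry) retain covariance close to the prior.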
NASA Astrophysics Data System (ADS)
Toldo, R.; Fantini, F.; Giona, L.; Fantoni, S.; Fusiello, A.
2013-02-01
A novel multi-view stereo reconstruction method is presented. The algorithm is focused on accuracy and is highly engineered, with some parts taking advantage of the graphics processing unit. In addition, it is seamlessly integrated with the output of a structure-and-motion pipeline. In the first part of the algorithm a depth map is extracted independently for each image. The final depth map is generated from the depth hypotheses using a Markov random field optimization technique over the image grid. An octree data structure accumulates the votes coming from each depth map. A novel procedure to remove rogue points is proposed that takes into account the visibility information and the matching score of each point. Finally, a texture map is built by making careful use of both the visibility and the view-angle information. Several results show the effectiveness of the algorithm under different working scenarios.
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
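The regularization variant of the spectrum-estimation step can be sketched as Tikhonov-regularized least squares with the regularization weight chosen by leave-one-out cross-validation, as the abstract describes. The system matrix, data, and lambda grid below are synthetic stand-ins for the wedge-phantom attenuation model:

```python
import numpy as np

rng = np.random.default_rng(1)
m = n = 30
U = np.linalg.qr(rng.standard_normal((m, m)))[0]
sv = 10.0 ** -np.linspace(0, 6, n)        # rapidly decaying singular values
A = U * sv                                # ill-conditioned system matrix
w_true = np.abs(rng.standard_normal(n))   # nonnegative spectrum weights
b = A @ w_true + 1e-4 * rng.standard_normal(m)

def tikhonov(A, b, lam):
    """Regularized solution of min ||A w - b||^2 + lam ||w||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def loocv(lam):
    """Leave-one-out prediction error for a given regularization weight."""
    return sum((A[i] @ tikhonov(A[np.arange(m) != i], b[np.arange(m) != i], lam)
                - b[i]) ** 2 for i in range(m))

best = min(10.0 ** np.arange(-8, 0), key=loocv)
w_hat = tikhonov(A, b, best)
```

The paper's formulation is additionally a quadratic program (nonnegative spectrum weights); adding that constraint would require a QP solver in place of the closed-form solve.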
Image reconstruction of IRAS survey scans
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. Romke
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector in each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arcseconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 arcminute at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density, and composition) can be made directly from the data without prior image (re)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. To discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make the interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods for determining highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and usable in chronological research alongside recently measured ages. Here, we introduce the methodological frameworks and archaeological applications.
3D Reconstruction of Human Motion from Monocular Image Sequences.
Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo
2016-08-01
This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method relies on base poses trained a priori. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
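The temporal bone-length constancy idea can be sketched as a penalty that is zero exactly when every bone keeps its length across frames. The joint layout and bone list below are illustrative, not the paper's model:

```python
import numpy as np

def bone_length_penalty(poses, bones):
    """poses: (T, J, 3) joint positions per frame; bones: (parent, child) pairs.
    Returns the summed temporal variance of bone lengths (0 iff constant)."""
    lengths = np.stack(
        [np.linalg.norm(poses[:, c] - poses[:, p], axis=1) for p, c in bones],
        axis=1)                                   # (T, n_bones)
    return float(np.sum(np.var(lengths, axis=0)))

# A rigidly translating skeleton incurs zero penalty; a stretching one does not
skel = np.array([[0.0, 0, 0], [0, 1, 0], [0, 2, 0]])   # 3 joints in a line
bones = [(0, 1), (1, 2)]
rigid = np.stack([skel + t * np.array([1.0, 0, 0]) for t in range(5)])
stretch = np.stack([skel * (1 + 0.1 * t) for t in range(5)])
print(bone_length_penalty(rigid, bones), bone_length_penalty(stretch, bones))
```

In the paper this term regularizes the factorization for non-periodic motion; here it is shown standalone on toy trajectories.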
Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)
NASA Astrophysics Data System (ADS)
McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian
2006-03-01
To create a repository of clinical data, CT images and tissue samples, and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Resource Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuations (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been shown to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and filtered BONE images. The CT numbers measured in the ACR CT Phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the
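The threshold-based emphysema index described above (percentage of lung voxels below -950 HU) is simple to sketch; the HU volume and lung mask below are synthetic stand-ins for a segmented chest CT:

```python
import numpy as np

rng = np.random.default_rng(2)
hu = rng.normal(-850, 60, size=(4, 64, 64))   # synthetic lung CT numbers (HU)
lung_mask = np.ones_like(hu, dtype=bool)      # pretend the whole volume is lung

def emphysema_index(hu, mask, threshold=-950.0):
    """Percentage of lung voxels with a CT number below the threshold."""
    lung = hu[mask]
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size

print(round(emphysema_index(hu, lung_mask), 2))
```

Because the index depends only on a hard threshold, it is sensitive to the reconstruction kernel's noise level, which is exactly why the study compares STANDARD, BONE, and median-filtered BONE images.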
Microwave Imaging for Breast Cancer Detection: Advances in Three-Dimensional Image Reconstruction
Golnabi, Amir H.; Meaney, Paul M.; Epstein, Neil R.; Paulsen, Keith D.
2013-01-01
Microwave imaging is based on differences in the electrical properties (permittivity and conductivity) of materials. Microwave imaging for biomedical applications is particularly interesting, mainly because the range of dielectric properties of different tissues can provide important functional information about their health. Under the assumption that a 3D scattering problem can be reasonably represented by a simplified 2D model, one can take advantage of the simplicity and lower computational cost of 2D models to characterize the 3D phenomenon. Nonetheless, by eliminating excessive model simplifications, 3D microwave imaging provides potentially more valuable information than 2D techniques, and as a result, more accurate dielectric property maps may be obtained. In this paper, we present some advances we have made in three-dimensional image reconstruction, and show the results from a 3D breast phantom experiment using our clinical microwave imaging system at Dartmouth-Hitchcock Medical Center (DHMC), NH. PMID:22255641
Evolving generalized Voronoi diagrams for accurate cellular image segmentation.
Yu, Weimiao; Lee, Hwee Kuan; Hariharan, Srivats; Bu, Wenyu; Ahmed, Sohail
2010-04-01
Analyzing cellular morphologies on a cell-by-cell basis is vital for drug discovery, cell biology, and many other biological studies. Interactions between cells in their culture environments cause cells to touch each other in acquired microscopy images. Because of this phenomenon, cell segmentation is a challenging task, especially when the cells are of similar brightness and of highly variable shapes. The concept of topological dependence and the maximum common boundary (MCB) algorithm were presented in our previous work (Yu et al., Cytometry Part A 2009;75A:289-297). However, the MCB algorithm suffers from a few shortcomings, such as low computational efficiency and difficulty in generalizing to higher dimensions. To overcome these limitations, we present the evolving generalized Voronoi diagram (EGVD) algorithm. Utilizing image intensity and geometric information, EGVD preserves topological dependence easily in both 2D and 3D images, such that touching cells can be segmented satisfactorily. A systematic comparison with other methods demonstrates that EGVD is accurate and much more efficient. PMID:20169588
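The geometric core that EGVD builds on, assigning each pixel to its nearest seed to form generalized Voronoi regions, can be sketched as follows. The real algorithm additionally evolves the regions using image intensity; the seeds and grid here are purely illustrative:

```python
import numpy as np

h, w = 64, 64
seeds = np.array([[16, 16], [48, 40], [20, 50]])   # one seed per touching cell

yy, xx = np.mgrid[0:h, 0:w]
# Squared distance from every pixel to every seed, shape (n_seeds, h, w)
d2 = ((yy[None] - seeds[:, 0, None, None]) ** 2
      + (xx[None] - seeds[:, 1, None, None]) ** 2)
labels = d2.argmin(axis=0)   # each pixel takes the label of its nearest seed
print(labels.shape)
```

Pure Euclidean Voronoi splits touching cells along equidistant lines; EGVD improves on this by letting the boundary evolution respect image intensity while preserving the topological dependence between regions.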
Imaging, Reconstruction, And Display Of Corneal Topography
NASA Astrophysics Data System (ADS)
Klyce, Stephen D.; Wilson, Steven E.
1989-12-01
The cornea is the major refractive element in the eye; even minor surface distortions can produce a significant reduction in visual acuity. Standard clinical methods used to evaluate corneal shape include keratometry, which assumes the cornea is ellipsoidal in shape, and photokeratoscopy, which images a series of concentric light rings on the corneal surface. These methods fail to document many of the corneal distortions that can degrade visual acuity. Algorithms have been developed to reconstruct the three dimensional shape of the cornea from keratoscope images, and to present these data in the clinically useful display of color-coded contour maps of corneal surface power. This approach has been implemented on a new generation video keratoscope system (Computed Anatomy, Inc.) with rapid automatic digitization of the image rings by a rule-based approach. The system has found clinical use in the early diagnosis of corneal shape anomalies such as keratoconus and contact lens-induced corneal warpage, in the evaluation of cataract and corneal transplant procedures, and in the assessment of corneal refractive surgical procedures. Currently, ray tracing techniques are being used to correlate corneal surface topography with potential visual acuity in an effort to more fully understand the tolerances of corneal shape consistent with good vision and to help determine the site of dysfunction in the visually impaired.
Photogrammetric 3D reconstruction using mobile imaging
NASA Astrophysics Data System (ADS)
Fritsch, Dieter; Syll, Miguel
2015-03-01
In this paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
Total variation minimization-based multimodality medical image reconstruction
NASA Astrophysics Data System (ADS)
Cui, Xuelin; Yu, Hengyong; Wang, Ge; Mili, Lamine
2014-09-01
Since its recent inception, simultaneous image reconstruction for multimodality fusion has received a great deal of attention due to its superior imaging performance. Meanwhile, compressed sensing (CS)-based image reconstruction methods have undergone rapid development because of their ability to significantly reduce the amount of raw data required. In this work, we combine computed tomography (CT) and magnetic resonance imaging (MRI) into a single CS-based reconstruction framework. From a theoretical viewpoint, CS-based reconstruction methods require prior sparsity knowledge to perform reconstruction. In addition to the conventional data fidelity term, the multimodality imaging information is utilized to improve the reconstruction quality. The prior information in this context is that most medical images can be approximated by a piecewise-constant model, so the discrete gradient transform (DGT), whose l1 norm is the total variation (TV), can serve as a sparse representation. More importantly, multimodality images of the same object must share structural similarity, which can be captured by the DGT. The prior information on similar distributions of the sparse DGTs is employed to improve CT and MRI image quality synergistically for a CT-MRI scanner platform. Numerical simulation with undersampled CT and MRI datasets is conducted to demonstrate the merits of the proposed hybrid image reconstruction approach. Our preliminary results confirm that the proposed method outperforms conventional CT and MRI reconstructions applied separately.
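The discrete gradient transform and its TV norm, the shared sparsity term described above, can be sketched as follows (forward differences with a zero far boundary; the isotropic TV variant is shown):

```python
import numpy as np

def dgt(img):
    """Discrete gradient transform: forward differences along each axis."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return gx, gy

def tv_norm(img, eps=1e-12):
    """Isotropic total variation: l1 norm of the gradient magnitude."""
    gx, gy = dgt(img)
    return float(np.sum(np.sqrt(gx**2 + gy**2 + eps)))

# Piecewise-constant images have sparse DGTs and hence small TV
flat = np.ones((32, 32))
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0
print(tv_norm(flat), tv_norm(edge))   # TV(flat) ~ 0, TV(edge) ~ 32
```

In the paper's joint framework, the same DGT is applied to the CT and MRI images and their similarity is enforced across modalities; here only the transform itself is shown.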
Block-based reconstructions for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Correa, Claudia V.; Arguello, Henry; Arce, Gonzalo R.
2013-05-01
The Coded Aperture Snapshot Spectral Imaging (CASSI) system captures spectral information of a scene using a reduced number of focal plane array (FPA) projections. These projections are highly structured and localized such that each measurement contains information from a small portion of the data cube. Compressed sensing reconstruction algorithms are then used to recover the underlying 3-dimensional (3D) scene. The computational burden of recovering a hyperspectral scene in CASSI is overwhelming for some applications, such that reconstructions can take hours on desktop architectures. This paper presents a new method to reconstruct a hyperspectral signal from its compressive measurements using several overlapped block reconstructions. This approach exploits the structure of the CASSI sensing matrix to separately reconstruct overlapped regions of the 3D scene. The resultant reconstructions are then assembled to obtain the full recovered data cube. Typically, block processing causes undesired artifacts in the recovered signal. Vertical and horizontal overlaps between adjacent blocks are therefore used to avoid these artifacts and increase the quality of the reconstructed images. The reconstruction time and the quality of the reconstructed images are calculated as a function of the block size and the amount of overlap. Simulations show that the quality of the reconstructions is increased by up to 6 dB and the reconstruction time is reduced by up to 4 times when using block-based reconstruction instead of recovering the full data cube at once. The proposed method is suitable for multi-processor architectures in which each core recovers one block at a time.
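The assembly step, averaging overlapped block reconstructions to suppress block-boundary artifacts, can be sketched as below. `reconstruct_block` is a placeholder for the per-block CS solver, and block size and overlap are illustrative:

```python
import numpy as np

def reconstruct_block(block):
    return block.copy()             # placeholder for the per-block CS recovery

def blockwise(image, bsize=16, overlap=4):
    """Reconstruct overlapped blocks and average where they overlap."""
    h, w = image.shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    step = bsize - overlap
    for i in range(0, h - overlap, step):
        for j in range(0, w - overlap, step):
            ii = slice(i, min(i + bsize, h))
            jj = slice(j, min(j + bsize, w))
            out[ii, jj] += reconstruct_block(image[ii, jj])
            weight[ii, jj] += 1.0
    return out / weight             # averaging hides block-boundary seams

cube_slice = np.random.default_rng(4).random((64, 64))
assembled = blockwise(cube_slice)
```

With a perfect per-block solver the assembly is exact; with a real CS solver, the averaged overlap regions are what remove the seam artifacts the abstract mentions. Each block is independent, which is why the scheme maps naturally onto multi-core hardware.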
Tree Stem Reconstruction Using Vertical Fisheye Images: A Preliminary Study
NASA Astrophysics Data System (ADS)
Berveglieri, A.; Tommaselli, A. M. G.
2016-06-01
A preliminary study was conducted to assess a tree stem reconstruction technique using panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral views are rectified to the same scale across the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed using the lateral virtual images generated from the vertical fisheye images, with the advantage of using fewer images, all taken from a single station.
Current profile reconstruction using X-ray imaging on the PEGASUS toroidal experiment
NASA Astrophysics Data System (ADS)
Tritz, Kevin Lee
Internal plasma profiles, specifically the current profile, are necessary to accurately characterize the plasma equilibrium and perform detailed stability analyses of magnetically confined toroidal plasmas. External magnetic measurements alone are not sufficient to properly constrain the current profile for an equilibrium reconstruction. This work confirms the insensitivity of the profiles to external magnetics and demonstrates the successful incorporation of tangential X-ray imaging into a modified equilibrium code for current profile reconstruction in highly shaped, low aspect-ratio plasmas. An equilibrium reconstruction code was developed that used two dimensional X-ray images to constrain a flexible spline parameterization of the plasma profiles. Image constraint modeling was performed with this code, demonstrating that the profiles were well constrained, with less than 10% deviation of the reconstructed central safety factor, if the image measurement noise was held below 2% for emissivity constraints, and below 1% for intensity constraints. Two tangential soft X-ray pinhole camera imaging systems, a transmissive and reflective phosphor design, were built and operated on the PEGASUS toroidal experiment. Intensity image contours from these systems were used to constrain equilibrium reconstructions of the plasma discharge. The shapes and values of the q profiles determined by these reconstructions correspond well with the presence of coherent MHD activity observed in the plasmas. A comparison of the X-ray intensity-constrained equilibria with the external-magnetics-only reconstructions showed good agreement between most gross plasma parameters, but large variation between the reconstructed profiles. A next generation X-ray imaging system was designed to provide higher sensitivity, a more compact form factor, and multiple time point capability. The increased sensitivity will allow the variance of the experimental reconstructed profiles to achieve the level
NASA Astrophysics Data System (ADS)
Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard
2000-04-01
This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to the currently used analytic reconstruction method (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diameter lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.
Anatomical Brain Images Alone Can Accurately Diagnose Chronic Neuropsychiatric Illnesses
Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Hao, Xuejun; Xu, Dongrong; Liu, Jun; Weissman, Myrna; Peterson, Bradley S.
2012-01-01
Objective Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. Methods We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. Results In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. Conclusions Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders. Extensions of these
Terrain reconstruction from Chang'e-3 PCAM images
NASA Astrophysics Data System (ADS)
Wang, Wen-Rui; Ren, Xin; Wang, Fen-Fei; Liu, Jian-Jun; Li, Chun-Lai
2015-07-01
The existing terrain models that describe the local lunar surface have limited resolution and accuracy, which can hardly meet the needs of rover navigation, positioning and geological analysis. China launched the lunar probe Chang'e-3 in December 2013. Chang'e-3 comprised a lander and a lunar rover called “Yutu” (Jade Rabbit). A set of panoramic cameras was installed on the rover mast. After acquiring panoramic images of the four sites that were explored, terrain models of the local lunar surface with a resolution of 0.02 m were reconstructed. Compared with other data sources, the models derived from Chang'e-3 data were clear and accurate enough to be used for planning the route of Yutu. Supported by the National Natural Science Foundation of China.
Concurrent image and dose reconstruction for image guided radiation therapy
NASA Astrophysics Data System (ADS)
Sheng, Ke
Knowing the patient's actual position is essential for intensity modulated radiation therapy (IMRT), a procedure that uses tightened margins and escalated tumor doses. To eliminate geometric uncertainty in IMRT, daily imaging is preferred. The imaging dose, limited field of view and imaging concurrency of MVCT (mega-voltage computerized tomography) are investigated in this work. By applying partial volume imaging (PVI), the imaging dose can be reduced for region-of-interest (ROI) imaging. The imaging dose and the image quality are quantitatively balanced with inverse imaging dose planning. With PVI, a 72% average imaging dose reduction was observed in a typical prostate patient case. The algebraic reconstruction technique (ART) based projection onto convex sets (POCS) shows higher robustness than filtered back projection when the available imaging data are not complete and continuous. However, when the projection is continuous as in the actual delivery, a non-iterative wavelet-based multiresolution local tomography (WMLT) is able to achieve 1% accuracy within the ROI. The reduction of imaging dose depends on the size of the ROI. The improvement of concurrency is also discussed based on the combination of PVI and WMLT. Useful target images were acquired with treatment beams, and the temporal resolution can be increased to 20 seconds in tomotherapy. The data truncation problem with the portal imager was also studied. Results show that the image quality is not adversely affected by truncation when WMLT is employed. When online imaging is available, a perturbation dose calculation (PDC) that estimates the actual delivered dose is proposed. Derived as a correction to Fano's theorem, PDC retains the first-order term in the density variation to account for internal and external anatomy changes. Although the change in the dose distribution caused by internal organ motion is less than 1% for 6 MV beams, the external anatomy change has
NASA Astrophysics Data System (ADS)
Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.
2011-12-01
The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.
Depth-based selective image reconstruction using spatiotemporal image analysis
NASA Astrophysics Data System (ADS)
Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu
1999-03-01
In industrial plants, a remote monitoring system that eliminates physical tour inspections is often considered desirable. However, the image sequence obtained from a mobile inspection robot is hard to interpret because objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique that removes needless objects from the foreground and electronically recovers the occluded background. Our algorithm is based on spatiotemporal analysis that enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real image sequence taken by the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.
NASA Astrophysics Data System (ADS)
Barbour, San-Lian S.; Barbour, Randall L.; Koo, Ping C.; Graber, Harry L.; Chang, Jenghwa
1995-05-01
We have computed optical images of the female breast based on analysis of tomographic data obtained from simulated time-independent optical measurements of anatomically accurate maps derived from segmented 3D magnetic resonance (MR) images. Images were segmented according to the measured MR contrast levels for fat and parenchymal tissue from T1-weighted acquisitions. Computed images were obtained from analysis of solutions to the forward problem, comparing breasts containing 'added pathologies' representing tumors to breasts lacking these inclusions. Both breast size and optical properties have been varied. In each case, two small simulated tumors were 'added' to the background tissue. Values of the absorption and scattering coefficients of the tumors both greater and less than those of the surrounding tissue have been examined. Detector responses and the required imaging operators were computed by numerically solving the diffusion equation for inhomogeneous media. Detectors were distributed uniformly, in a circular fashion, around the breast in a plane positioned parallel to and halfway between the chest wall and the nipple. A total of 20 sources were used, and 20 detectors for each. Reconstructed images were obtained by solving a linear perturbation equation derived from transport theory. Three algorithms were tested to solve the perturbation equation: conjugate gradient descent (CGD), projection onto convex sets (POCS), and the simultaneous algebraic reconstruction technique (SART). The results showed that in each case high-quality reconstructions were obtained. The computed images correctly resolved and identified the spatial positions of the two tumors. Additional studies showed that the computed images were stable to large systematic errors in the imaging operators and to added noise. Further, examination of the computed detector readings indicates that images of tissue up to approximately 10 cm in thickness should be possible. The
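One of the three solvers tested, SART, can be sketched for a generic linear perturbation system W x = d. The matrix and data below are illustrative stand-ins, not diffusion-derived imaging operators:

```python
import numpy as np

def sart(W, d, n_iter=200, relax=0.5):
    """Simultaneous algebraic reconstruction technique for W x = d (W >= 0)."""
    m, n = W.shape
    row_sums = W.sum(axis=1)     # per-measurement normalization
    col_sums = W.sum(axis=0)     # per-unknown normalization
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + relax * (W.T @ ((d - W @ x) / row_sums)) / col_sums
    return x

rng = np.random.default_rng(3)
W = np.abs(rng.standard_normal((50, 20)))   # nonnegative weight matrix
x_true = rng.random(20)
d = W @ x_true                              # consistent, noise-free data
x_rec = sart(W, d)
```

SART updates all unknowns simultaneously from all measurements per sweep, with row and column normalizations and a relaxation factor in (0, 2) for convergence; CGD and POCS solve the same linear system with different iteration structures.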
Numerical modelling and image reconstruction in diffuse optical tomography
Dehghani, Hamid; Srinivasan, Subhadra; Pogue, Brian W.; Gibson, Adam
2009-01-01
The development of diffuse optical tomography as a functional imaging modality has relied largely on the use of model-based image reconstruction. The recovery of optical parameters from boundary measurements of light propagation within tissue is an inherently difficult problem, because it is nonlinear, ill-posed and ill-conditioned. Additionally, although the measured near-infrared signals of light transmission through tissue provide high imaging contrast, the reconstructed images suffer from poor spatial resolution due to the diffuse propagation of light in biological tissue. The application of model-based image reconstruction is reviewed in this paper, together with a numerical modelling approach to light propagation in tissue as well as generalized image reconstruction using boundary data. A comprehensive review and details of the basis for using spatial and structural prior information are also discussed, whereby the use of spectral and dual-modality systems can improve contrast and spatial resolution. PMID:19581256
Reconstruction of biofilm images: combining local and global structural parameters.
Resat, Haluk; Renslow, Ryan S; Beyenal, Haluk
2014-10-01
Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process. PMID:25377487
Undersampled MR Image Reconstruction with Data-Driven Tight Frame
Liu, Jianbo; Wang, Shanshan; Peng, Xi; Liang, Dong
2015-01-01
Undersampled magnetic resonance image reconstruction employing sparsity regularization has fascinated many researchers in recent years under the support of compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack adaptability to capture the structure information or suffer from high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven tight frame magnetic resonance image reconstruction (DDTF-MRI) method. By taking advantage of the efficiency and effectiveness of a data-driven tight frame, DDTF-MRI trains an adaptive tight frame to sparsify the to-be-reconstructed MR image. Furthermore, a two-level Bregman iteration algorithm has been developed to solve the proposed model. The proposed method has been compared to two state-of-the-art methods on four datasets and encouraging performance has been achieved by DDTF-MRI. PMID:26199641
Research on THz CT system and image reconstruction algorithm
NASA Astrophysics Data System (ADS)
Li, Ming-liang; Wang, Cong; Cheng, Hong
2009-07-01
Terahertz computed tomography offers not only high spatial and density resolution without image overlap, but also the capability of being used directly in digital processing and spectral analysis, which makes it a good choice for parameter detection in process control. However, diffraction and scattering of THz waves can blur or distort the reconstructed image. To find the most effective reconstruction method for a THz CT model, and in view of the expensive cost of hardware, a fan-beam THz CT industrial detection scanning model consisting of 8 emitters and 32 receivers was established, drawing on infrared CT technology. The model contains control and interface, data collection, and image reconstruction subsystems. All the sub-function modules are analyzed, and images are reconstructed with an algebraic reconstruction algorithm. The experimental results show it to be an effective, efficient algorithm with high resolution, performing even better than the back-projection method.
Evaluation of back projection methods for breast tomosynthesis image reconstruction.
Zhou, Weihua; Lu, Jianping; Zhou, Otto; Chen, Ying
2015-06-01
Breast cancer is the most common cancer among women in the USA. Compared to mammography, digital breast tomosynthesis is a new imaging technique that may improve diagnostic accuracy by removing the ambiguities of overlapped tissues and providing 3D information of the breast. Tomosynthesis reconstruction algorithms generate 3D reconstructed slices from a few limited-angle projection images. Among different reconstruction algorithms, back projection (BP) is considered an important foundation of quite a few reconstruction techniques with deblurring algorithms, such as filtered back projection. In this paper, two BP variants, α-trimmed BP and principal component analysis-based BP, were proposed to improve the image quality over that of traditional BP. Computer simulations and phantom studies demonstrated that α-trimmed BP may improve signal response performance and suppress noise in breast tomosynthesis image reconstruction. PMID:25384538
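The α-trimmed idea can be sketched as a trimmed mean over per-view backprojected values; the exact trimming rule the authors use is an assumption here, and the five-view numbers are purely illustrative:

```python
import numpy as np

def alpha_trimmed_bp(view_stack, alpha=0.2):
    """Trimmed-mean combination of backprojected views: for each voxel,
    discard the alpha fraction of smallest and largest contributions
    before averaging (a sketch of an alpha-trimmed BP combination)."""
    stack = np.sort(view_stack, axis=0)   # sort per-voxel contributions
    n = stack.shape[0]
    k = int(alpha * n)
    return stack[k:n - k].mean(axis=0)

# one voxel seen from five views; one view carries an overlap artifact
views = np.array([[1.0], [1.0], [1.0], [100.0], [1.0]])
voxel = alpha_trimmed_bp(views, alpha=0.2)
```

Plain BP would average all five contributions and smear the outlier into the voxel; the trimmed mean discards it.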
Accurate scatter compensation using neural networks in radionuclide imaging
Ogawa, Koichi; Nishizaki, N.
1993-08-01
The paper presents a new method to estimate primary photons using an artificial neural network in radionuclide imaging. The neural network for 99mTc had three layers, i.e., one input layer with five units, one hidden layer with five units, and one output layer with two units. As input values to the input units, the authors used count ratios which were the ratios of the counts acquired by narrow windows to the total count acquired by a broad window with the energy range from 125 to 154 keV. The outputs were a scatter count ratio and a primary count ratio. Using the primary count ratio and the total count they calculated the primary count of the pixel directly. The neural network was trained with a back-propagation algorithm using calculated true energy spectra obtained by a Monte Carlo method. The simulation showed that an accurate estimation of primary photons was accomplished within an error ratio of 5% for primary photons.
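The 5-5-2 network described above can be sketched as a tiny feed-forward model mapping five window count ratios to scatter and primary ratios. The weights below are random placeholders, not trained values, and the sigmoid activation is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = 0.1 * rng.normal(size=(5, 5)), np.zeros(5)  # hidden layer: 5 units
W2, b2 = 0.1 * rng.normal(size=(2, 5)), np.zeros(2)  # outputs: scatter & primary ratios

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def estimate_primary(count_ratios, total_count):
    """Forward pass: five narrow/broad window count ratios in,
    primary count (primary ratio x total broad-window count) out."""
    h = sigmoid(W1 @ count_ratios + b1)
    scatter_ratio, primary_ratio = sigmoid(W2 @ h + b2)
    return primary_ratio * total_count

# per-pixel estimate from equal count ratios (illustrative inputs only)
primary = estimate_primary(np.full(5, 0.2), total_count=1000.0)
```

In the paper the weights would come from back-propagation training on Monte Carlo energy spectra; here the point is only the input/output structure.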
Calibration and Image Reconstruction for the Hurricane Imaging Radiometer (HIRAD)
NASA Technical Reports Server (NTRS)
Ruf, Christopher; Roberts, J. Brent; Biswas, Sayak; James, Mark W.; Miller, Timothy
2012-01-01
The Hurricane Imaging Radiometer (HIRAD) is a new airborne passive microwave synthetic aperture radiometer designed to provide wide swath images of ocean surface wind speed under heavy precipitation and, in particular, in tropical cyclones. It operates at 4, 5, 6 and 6.6 GHz and uses interferometric signal processing to synthesize a pushbroom imager in software from a low profile planar antenna with no mechanical scanning. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during Fall 2010 as its first science field campaign. HIRAD produced images of upwelling brightness temperature over an approx. 70 km swath width with approx. 3 km spatial resolution. From this, ocean surface wind speed and column averaged atmospheric liquid water content can be retrieved across the swath. The calibration and image reconstruction algorithms that were used to verify HIRAD functional performance during and immediately after GRIP were only preliminary and used a number of simplifying assumptions and approximations about the instrument design and performance. The development and performance of a more detailed and complete set of algorithms are reported here.
Li, Shu; Chan, Cheong; Stockmann, Jason P.; Tagare, Hemant; Adluru, Ganesh; Tam, Leo K.; Galiana, Gigi; Constable, R. Todd; Kozerke, Sebastian; Peters, Dana C.
2014-01-01
Purpose: To investigate the algebraic reconstruction technique (ART) for parallel imaging reconstruction of radial data, applied to accelerated cardiac cine. Methods: A GPU-accelerated ART reconstruction was implemented and applied to simulations, point spread functions (PSFs), and twelve subjects imaged with radial cardiac cine acquisitions. Cine images were reconstructed with radial ART at multiple undersampling levels (Nr = 192, Np = 96 down to 16). Images were qualitatively and quantitatively analyzed for sharpness and artifacts, and compared to filtered back-projection (FBP) and conjugate gradient SENSE (CG SENSE). Results: Radial ART provided reduced artifacts and mainly preserved spatial resolution, for both simulations and in vivo data. Artifacts were qualitatively and quantitatively less with ART than FBP using 48, 32, and 24 Np, although FBP provided quantitatively sharper images at undersampling levels of 48-24 Np (all p<0.05). Use of undersampled radial data for generating auto-calibrated coil-sensitivity profiles resulted in slightly reduced quality. ART was comparable to CG SENSE. GPU acceleration increased ART reconstruction speed 15-fold, with little impact on the images. Conclusion: GPU-accelerated ART is an alternative approach to image reconstruction for parallel radial MR imaging, providing reduced artifacts while mainly maintaining sharpness compared to FBP, as shown by its first application in cardiac studies. PMID:24753213
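At its core, ART is the Kaczmarz row-action update: cycle through measurement rows and project the current estimate onto each measurement hyperplane. A minimal dense sketch on a toy 2x2 system (not radial MR data):

```python
import numpy as np

def art(A, b, n_sweeps=20, relax=1.0):
    """Kaczmarz-type ART: for each row a_i of A, move the estimate x
    onto the hyperplane a_i . x = b_i, scaled by a relaxation factor."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy consistent system with solution [1, 2]
x_hat = art(np.array([[2.0, 1.0], [1.0, 3.0]]), np.array([4.0, 7.0]))
```

For parallel radial imaging, each row would encode one k-space sample through the coil sensitivities and the (non-Cartesian) Fourier transform, and the loop would typically run on the GPU.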
Chord-based image reconstruction from clinical projection data
NASA Astrophysics Data System (ADS)
King, Martin; Xia, Dan; Pan, Xiaochuan; Vannier, Michael; Köhler, Thomas; La Riviére, Patrick; Sidky, Emil; Giger, Maryellen
2008-03-01
Chord-based algorithms can eliminate cone-beam artifacts in images reconstructed from a clinical computed tomography (CT) scanner. The feasibility of using chord-based reconstruction algorithms was evaluated with three clinical CT projection data sets. The first projection data set was acquired using a clinical 64-channel CT scanner (Philips Brilliance 64) that consisted of an axial scan from a quality assurance phantom. Images were reconstructed using (1) a full-scan FDK algorithm, (2) a short-scan FDK algorithm, and (3) the chord-based backprojection filtration algorithm (BPF) using full-scan data. The BPF algorithm was capable of reproducing the morphology of the phantom quite well, but exhibited significantly less noise than the two FDK reconstructions as well as the reconstruction obtained from the clinical scanner. The second and third data sets were obtained from scans of a head phantom and a patient's thorax. For both of these data sets, the BPF reconstructions were comparable to the short-scan FDK reconstructions in terms of image quality, although sharper features were indistinct in the BPF reconstructions. This research demonstrates the feasibility of chord-based algorithms for reconstructing images from clinical CT projection data sets and provides a framework for implementing and testing algorithmic innovations.
Image Reconstruction for Diffuse Optical Tomography Based on Radiative Transfer Equation
Han, Bo; Tang, Jinping
2015-01-01
Diffuse optical tomography is a novel molecular imaging technology for small animal studies. Most known reconstruction methods use the diffusion approximation (DA) as the forward model, although the validity of the DA breaks down in certain situations. In this work, we use the radiative transfer equation as the forward model, which provides an accurate description of light propagation within biological media, and investigate the potential of sparsity constraints in solving the diffuse optical tomography inverse problem. The feasibility of the sparsity reconstruction approach is evaluated with boundary angular-averaged measurement data and internal angular-averaged measurement data. Simulation results demonstrate that in most of the test cases the reconstructions with sparsity regularization are both qualitatively and quantitatively more reliable than those with standard L2 regularization. Results also show the competitive performance of the split Bregman algorithm for DOT image reconstruction with sparsity regularization compared with other existing L1 algorithms. PMID:25648064
Neural portraits of perception: reconstructing face images from evoked brain activity.
Cowen, Alan S; Chun, Marvin M; Kuhl, Brice A
2014-07-01
Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity. While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex. However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions. Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network. Thus, we investigated (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and (b) whether this could be achieved even when excluding activity within occipital cortex. Our approach involved four steps. (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces. (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces. (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores. (4) Finally, these scores were transformed into reconstructed images. Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex. This methodology not only represents a novel and promising approach for investigating face perception, but also suggests avenues for reconstructing 'offline' visual experiences-including dreams, memories, and imagination-which are chiefly represented in higher-level cortical areas. PMID:24650597
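The four-step pipeline above can be sketched end-to-end with synthetic data. The random "faces", the simulated "voxel activity", and the ridge-regression mapping are illustrative assumptions; the paper's actual machine-learning step may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_pix, n_vox, k = 80, 64, 200, 10
faces = rng.normal(size=(n_train, n_pix))        # stand-in for training face images

# (1) PCA of the training faces
mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
scores = (faces - mean) @ Vt[:k].T               # component scores per face

# (2) map "brain activity" to component scores (here: ridge regression)
B = rng.normal(size=(n_vox, k))                  # hypothetical voxel loadings
activity = scores @ B.T + 0.01 * rng.normal(size=(n_train, n_vox))
M = np.linalg.solve(activity.T @ activity + 1e-3 * np.eye(n_vox),
                    activity.T @ scores)         # activity -> scores

# (3) predict scores for activity evoked by a new face, then
# (4) transform the predicted scores back into an image
new_scores = rng.normal(size=k)
new_activity = new_scores @ B.T                  # simulated test-face activity
recon = (new_activity @ M) @ Vt[:k] + mean
```

The reconstruction lives in the span of the top k principal components plus the mean face, which is why the component basis learned in step (1) bounds how detailed the recovered image can be.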
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have limited amounts of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
Quantitative image quality evaluation for cardiac CT reconstructions
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.
2016-03-01
Maintaining image quality in the presence of motion is always desirable and challenging in clinical cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Three different percentage plaques embedded in a 3 mm vessel phantom were imaged multiple times under motion-free, 60 bpm, and 80 bpm heart rates. Static (motion-free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.
Iterative image reconstruction techniques: cardiothoracic computed tomography applications.
Cho, Young Jun; Schoepf, U Joseph; Silverman, Justin R; Krazinski, Aleksander W; Canstein, Christian; Deak, Zsuzsanna; Grimm, Jochen; Geyer, Lucas L
2014-07-01
Iterative image reconstruction algorithms provide significant improvements over traditional filtered back projection in computed tomography (CT). Clinically available through recent advances in modern CT technology, iterative reconstruction enhances image quality through cyclical image calculation, suppressing image noise and artifacts, particularly blooming artifacts. The advantages of iterative reconstruction are apparent in traditionally challenging cases-for example, in obese patients, those with significant artery calcification, or those with coronary artery stents. In addition, as clinical use of CT has grown, so have concerns over ionizing radiation associated with CT examinations. Through noise reduction, iterative reconstruction has been shown to permit radiation dose reduction while preserving diagnostic image quality. This approach is becoming increasingly attractive as the routine use of CT for pediatric and repeated follow-up evaluation grows ever more common. Cardiovascular CT in particular, with its focus on detailed structural and functional analyses, stands to benefit greatly from the promising iterative solutions that are readily available. PMID:24662334
Takata, Tadanori; Ichikawa, Katsuhiro; Hayashi, Hiroyuki; Mitsui, Wataru; Sakuta, Keita; Koshida, Haruka; Yokoi, Tomohiro; Matsubara, Kousuke; Horii, Jyunsei; Iida, Hiroji
2012-01-01
The purpose of this study was to evaluate the image quality of an iterative reconstruction method, iterative reconstruction in image space (IRIS), implemented in a 128-slice multi-detector computed tomography (MDCT) system, the Siemens Somatom Definition Flash (Definition). We evaluated image noise by standard deviation (SD), as many researchers have done before, and in addition measured the modulation transfer function (MTF), the noise power spectrum (NPS), and perceptual low-contrast detectability using a water phantom containing a low-contrast object with a 10 Hounsfield unit (HU) contrast, to evaluate whether the noise reduction of IRIS was effective. The SD and NPS were measured from images of a water phantom. The MTF was measured from images of a thin metal wire and a bar pattern phantom with a bar contrast of 125 HU. The NPS of IRIS was lower than that of filtered back projection (FBP) in the middle- and high-frequency regions. The SD values were reduced by 21%. The MTFs of IRIS and FBP measured with the wire phantom coincided precisely. However, for the bar pattern phantom, the MTF values of IRIS at 0.625 and 0.833 cycles/mm were lower than those of FBP. Despite the reduction of the SD and the NPS, the low-contrast detectability study indicated no significant difference between IRIS and FBP. These results demonstrated that IRIS reduces noise while exactly preserving high-contrast resolution, with slight degradation of middle-contrast resolution, and can slightly improve low-contrast detectability, though without statistical significance. PMID:22516592
SART-Type Image Reconstruction from Overlapped Projections
Yu, Hengyong; Ji, Changguo; Wang, Ge
2011-01-01
To maximize the time-integrated X-ray flux from multiple X-ray sources and shorten the data acquisition process, a promising way is to allow overlapped projections from multiple simultaneously active sources without involving source multiplexing technology. The most challenging task in this configuration is to perform image reconstruction effectively and efficiently from the overlapped projections. Inspired by the single-source simultaneous algebraic reconstruction technique (SART), we hereby develop a multisource SART-type reconstruction algorithm, regularized by a sparsity-oriented constraint in the soft-threshold filtering framework, to reconstruct images from overlapped projections. Our numerical simulation results verify the correctness of the proposed algorithm and demonstrate the advantage of image reconstruction from overlapped projections. PMID:20871854
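The soft-threshold filtering step that regularizes the SART updates has a simple closed form, the shrinkage operator for an l1 sparsity constraint:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: shrink each coefficient toward zero by t and
    zero out anything smaller than t in magnitude (proximal map of t*||.||_1)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# e.g. shrink three coefficients by a threshold of 1.0
shrunk = soft_threshold(np.array([-3.0, 0.5, 2.0]), 1.0)
```

In a SART-type loop this would be applied to the image, or to its sparsifying-transform coefficients, after each algebraic update.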
NASA Astrophysics Data System (ADS)
Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.
2012-11-01
A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.
Iterative Image Reconstruction for Limited-Angle CT Using Optimized Initial Image
Guo, Jingyu; Qi, Hongliang; Xu, Yuan; Chen, Zijia; Li, Shulong; Zhou, Linghong
2016-01-01
Limited-angle computed tomography (CT) has great impact in some clinical applications. Existing iterative reconstruction algorithms cannot reconstruct high-quality images, leading to severe artifacts near edges. Optimal selection of the initial image influences iterative reconstruction performance but has not yet been studied deeply. In this work, we propose to generate an optimized initial image, followed by total variation (TV) based iterative reconstruction, exploiting the feature of image symmetry. The simulated-data and real-data reconstruction results indicate that the proposed method effectively removes the artifacts near edges. PMID:27066107
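The TV-regularized iteration can be sketched in 1D as gradient descent on a data-fidelity term plus a smoothed total-variation penalty; the identity system matrix, signal sizes, and step size below are toy assumptions, and the paper's symmetry-based initial-image step is not reproduced:

```python
import numpy as np

def tv_grad_step(x, A, b, lam=0.1, step=0.01):
    """One gradient step on ||Ax - b||^2 + lam * smoothed-TV(x) for a 1D signal."""
    eps = 1e-8
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)     # gradient of sum(sqrt(d^2 + eps)) w.r.t. d
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= w                # d_i depends on x_i with weight -1
    tv_grad[1:] += w                 # ... and on x_{i+1} with weight +1
    return x - step * (2.0 * A.T @ (A @ x - b) + lam * tv_grad)

def objective(x, A, b, lam=0.1):
    eps = 1e-8
    return np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(np.diff(x) ** 2 + eps))

rng = np.random.default_rng(4)
A = np.eye(32)                                   # trivial "system matrix" for illustration
b = np.concatenate([np.zeros(16), np.ones(16)]) + 0.1 * rng.normal(size=32)
x = np.zeros(32)
for _ in range(200):
    x = tv_grad_step(x, A, b)
```

The TV penalty pushes the iterate toward a piecewise-constant signal, which is why a good initial image matters: the descent can otherwise settle near edge artifacts.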
Sparsity-constrained PET image reconstruction with learned dictionaries.
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging. PMID:27494441
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
Infrared Astronomical Satellite (IRAS) image reconstruction and restoration
NASA Technical Reports Server (NTRS)
Gonsalves, R. A.; Lyons, T. D.; Price, S. D.; Levan, P. D.; Aumann, H. H.
1987-01-01
IRAS sky mapping data is being reconstructed as images, and an entropy-based restoration algorithm is being applied in an attempt to improve spatial resolution in extended sources. Reconstruction requires interpolation of non-uniformly sampled data. Restoration is accomplished with an iterative algorithm which begins with an inverse filter solution and iterates on it with a weighted entropy-based spectral subtraction.
Fast reconstruction of digital tomosynthesis using on-board images
Yan Hui; Godfrey, Devon J.; Yin Fangfang
2008-05-15
Digital tomosynthesis (DTS) is a method to reconstruct pseudo three-dimensional (3D) volume images from two-dimensional x-ray projections acquired over limited scan angles. Compared with cone-beam computed tomography, which is frequently used for 3D image guided radiation therapy, DTS requires less imaging time and dose. Successful implementation of DTS for fast target localization requires the reconstruction process to be accomplished within tight clinical time constraints (usually within 2 min). To achieve this goal, substantial improvement of reconstruction efficiency is necessary. In this study, a reconstruction process based upon the algorithm proposed by Feldkamp, Davis, and Kress was implemented on graphics hardware for the purpose of acceleration. The performance of the novel reconstruction implementation was tested for phantom and real patient cases. The efficiency of DTS reconstruction was improved by a factor of 13 on average, without compromising image quality. With acceleration of the reconstruction algorithm, the whole DTS generation process including data preprocessing, reconstruction, and DICOM conversion is accomplished within 1.5 min, which ultimately meets clinical requirement for on-line target localization.
Image-based reconstruction of 3D myocardial infarct geometry for patient specific applications
NASA Astrophysics Data System (ADS)
Ukwatta, Eranga; Rajchl, Martin; White, James; Pashakhanloo, Farhad; Herzka, Daniel A.; McVeigh, Elliot; Lardo, Albert C.; Trayanova, Natalia; Vadakkumpadan, Fijoy
2015-03-01
Accurate reconstruction of the three-dimensional (3D) geometry of a myocardial infarct from two-dimensional (2D) multi-slice image sequences has important applications in the clinical evaluation and treatment of patients with ischemic cardiomyopathy. However, this reconstruction is challenging because the resolution of common clinical scans used to acquire infarct structure, such as short-axis, late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) images, is low, especially in the out-of-plane direction. In this study, we propose a novel technique to reconstruct the 3D infarct geometry from low resolution clinical images. Our methodology is based on a function called logarithm of odds (LogOdds), which allows the broader class of linear combinations in the LogOdds vector space as opposed to being limited to only a convex combination in the binary label space. To assess the efficacy of the method, we used high-resolution LGE-CMR images of 36 human hearts in vivo, and 3 canine hearts ex vivo. The infarct was manually segmented in each slice of the acquired images, and the manually segmented data were downsampled to clinical resolution. The developed method was then applied to the downsampled image slices, and the resulting reconstructions were compared with the manually segmented data. Several existing reconstruction techniques were also implemented, and compared with the proposed method. The results show that the LogOdds method significantly outperforms all the other tested methods in terms of region overlap.
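The LogOdds idea, reduced to its essentials, is linear blending in an unbounded log-odds space instead of the binary label space. The sketch below is a minimal, illustrative version: the label-to-probability mapping `p_in` and the function names are assumptions of this sketch, not the authors' implementation, which operates on full multi-slice segmentations.

```python
import numpy as np

def logodds(p, eps=1e-6):
    """Map probabilities in (0,1) to the unbounded log-odds space."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interpolate_slices(mask_a, mask_b, t, p_in=0.8):
    """Linearly blend two binary slices in log-odds space.

    t in [0,1] is the fractional position between the two slices;
    p_in is an assumed probability assigned to foreground labels.
    """
    pa = np.where(mask_a > 0, p_in, 1.0 - p_in)
    pb = np.where(mask_b > 0, p_in, 1.0 - p_in)
    blended = (1.0 - t) * logodds(pa) + t * logodds(pb)
    return (sigmoid(blended) > 0.5).astype(np.uint8)

a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 0, 1]])
mid = interpolate_slices(a, b, 0.25)   # closer to slice a
```

Pixels on which both slices agree are preserved exactly; disputed pixels follow the nearer slice, which is the behavior a convex combination in the binary label space cannot express as flexibly.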
NASA Astrophysics Data System (ADS)
Liu, Qiu-hong; Shu, Fan; Zhang, Wen-kun; Cai, Ai-long; Li, Lei; Yan, Bin
2013-08-01
Linear scan Computed Tomography (LCT) has emerged as a promising technique in fields like industrial scanning and security inspection due to its straight-line source trajectory and high scanning speed. However, in practical applications of LCT, ordinary algorithms suffer from serious artifacts owing to the limited-angle and insufficient data. In this paper, a new method is proposed which reconstructs the image from partial Fourier data sampled on a pseudo-polar grid, based on alternating-direction anisotropic total variation minimization. The main idea is to recast the image reconstruction problem as solving an under-determined linear system, and then to reconstruct the image by applying the popular total variation (TV) minimization, forming an unconstrained optimization by means of the augmented Lagrangian and solving it with the alternating direction method of multipliers (ADMM), which contributes to fast convergence. The proposed method is practical for large-scale reconstruction tasks due to its algorithmic simplicity and computational efficiency, and it reconstructs better images. The results of numerical simulations and pseudo-real data reconstructions from the linear scan validate that the proposed method is both efficient and accurate.
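The recipe outlined above (recast reconstruction as an under-determined linear system, then minimize a TV penalty with an augmented-Lagrangian/ADMM scheme) can be sketched on a 1-D toy problem. The difference operator, penalty weight, and iteration count below are illustrative choices, not the paper's pseudo-polar Fourier setting.

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def tv_admm(A, b, lam=0.05, rho=1.0, iters=500):
    """ADMM for min_x 0.5||Ax - b||^2 + lam*||Dx||_1, D = finite differences."""
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]        # (n-1) x n difference matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    lhs = A.T @ A + rho * D.T @ D                    # fixed normal-equation matrix
    for _ in range(iters):
        x = np.linalg.solve(lhs, A.T @ b + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)               # shrink the gradient image
        u += D @ x - z                               # dual update
    return x

rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(20), np.ones(20)])  # piecewise-constant signal
A = rng.normal(size=(25, 40))                         # under-determined system
b = A @ x_true                                        # noiseless measurements
x_hat = tv_admm(A, b)
```

Because the fitted signal is piecewise constant, the TV penalty recovers it from far fewer measurements than unknowns, which is the mechanism the abstract relies on for limited-angle data.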
NASA Astrophysics Data System (ADS)
Wahbeh, W.; Nebiker, S.; Fangi, G.
2016-06-01
This paper exploits the potential of dense multi-image 3D reconstruction of destroyed cultural heritage monuments by either using public domain touristic imagery only or by combining the public domain imagery with professional panoramic imagery. The focus of our work is placed on the reconstruction of the Temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great Temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigations and the reconstruction were carried out using two types of imagery. The first are freely available generic touristic photos collected from the web. The second are panoramic images captured in 2010 for documenting those monuments. In the paper we present a 3D reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public domain imagery with panoramic images with very wide baselines. We subsequently investigate the aspects of accuracy and completeness obtainable from the public domain touristic images alone and from the combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3D point cloud fragments resulting from the limited coverage of the touristic photos. We then describe an approach using spherical photogrammetry as a virtual topographic survey allowing the co-registration of a detailed and accurate single 3D model of the temple interior and exterior.
NASA Astrophysics Data System (ADS)
Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei
2015-01-01
Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch-sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary could be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results from Monte Carlo simulations and real data.
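The likelihood side of such a framework is the classical ML-EM update; the paper's contribution adds a dictionary-based patch-sparsity penalty on top of it. A minimal sketch of the unpenalized update on a toy system matrix with noiseless counts:

```python
import numpy as np

def mlem(A, y, iters=200):
    """Plain ML-EM for Poisson data y ~ Poisson(Ax).

    This is only the likelihood term; the paper regularizes it further
    with a patch-sparsity penalty on a learned dictionary.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x
        proj[proj == 0] = 1e-12           # guard against divide-by-zero
        x = x / sens * (A.T @ (y / proj)) # multiplicative EM update
    return x

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = A @ np.array([2.0, 3.0])              # noiseless "counts"
x_hat = mlem(A, y)
```

With consistent, noiseless data the iterates converge to the true activity; with real Poisson counts the unregularized iterates become noisy at high iteration numbers, which is exactly what the sparsity prior is meant to control.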
Chen, Zikuan; Calhoun, Vince
2013-01-01
The magnetic field resulting from material magnetization in magnetic resonance imaging (MRI) has an object orientation effect, which produces an orientation dependence for acquired T2* images. On one hand, the orientation effect can be exploited for object anisotropy investigation (via multi-angle imaging); on the other hand, it is desirable to remove the orientation dependence using magnetic susceptibility reconstruction. In this report, we design a stick-star digital phantom to simulate multiple orientations of a stick-like object and use it to conduct various numerical simulations. Our simulations show that the object orientation effect is not propagated to the reconstructed magnetic susceptibility distribution. This suggests that accurate susceptibility reconstruction methods should be largely orientation independent. PMID:25114542
NASA Astrophysics Data System (ADS)
Johnson, Jami L.; Shragge, Jeffrey; van Wijk, Kasper
2015-03-01
We propose a new reconstruction algorithm for photoacoustic and laser-ultrasound imaging based on reverse time migration (RTM), a time reversal imaging algorithm originally developed for exploration seismology. RTM inherently handles strong velocity heterogeneity and complex propagation paths. A successful RTM analysis with appropriate handling of boundary conditions results in enhanced signal-to-noise, accurately located structures, and minimal artifacts. A laser-ultrasound experiment begins with a source wave field generated at the surface that propagates through the sample. Acoustic scatterers in the propagation path give rise to a scattered wave field, which travels to the surface and is recorded by acoustic detectors. To reconstruct the laser-ultrasound image, a synthetic source function is forward propagated and cross-correlated with the time-reversed and back-propagated recorded (scattered) wave field to image the scatterers at the correct location. Conversely, photoacoustic waves are generated by chromophores within the sample and propagate "one-way" to the detection surface. We utilize the velocity model validated by the laser-ultrasound reconstruction to accurately reconstruct the photoacoustic image with RTM. This approach is first validated with simulations, where inclusions behave both as a photoacoustic source and an acoustic scatterer. Subsequently, we demonstrate the capabilities of RTM with tissue phantom experiments using an all-optical, multi-channel acquisition geometry.
Compressed sensing sparse reconstruction for coherent field imaging
NASA Astrophysics Data System (ADS)
Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen
2016-04-01
Return signal processing and reconstruction plays a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as little as 10% of traditional samples, finally acquiring the target image precisely. The results of the numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from Chinese Academy of Sciences, the Light of “Western” Talent Cultivation Plan “Dr. Western Fund Project” (Grant No. Y429621213).
Compressed Sensing Inspired Image Reconstruction from Overlapped Projections
Yang, Lin; Lu, Yang; Wang, Ge
2010-01-01
The key idea discussed in this paper is to reconstruct an image from overlapped projections so that the data acquisition process can be shortened while the image quality remains essentially uncompromised. To perform image reconstruction from overlapped projections, the conventional reconstruction approach (e.g., filtered backprojection (FBP) algorithms) cannot be directly used because of two problems. First, overlapped projections represent an imaging system in terms of summed exponentials, which cannot be transformed into a linear form. Second, the overlapped measurement carries less information than the traditional line integrals. To meet these challenges, we propose a compressive sensing (CS)-based iterative algorithm for reconstruction from overlapped data. This algorithm starts with a good initial guess, relies on adaptive linearization, and minimizes the total variation (TV). We then demonstrate the feasibility of this algorithm in numerical tests. PMID:20689701
Online reconstruction of 3D magnetic particle imaging data
NASA Astrophysics Data System (ADS)
Knopp, T.; Hofmann, M.
2016-06-01
Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s‑1. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
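The block-averaging step can be illustrated with a fixed-size accumulator; the real framework chooses the block size adaptively to fit the available reconstruction time, so the class below is only a sketch with an assumed fixed block size.

```python
import numpy as np

class BlockAverager:
    """Average incoming raw-data frames in fixed-size blocks, emitting one
    averaged frame per block. A simple stand-in for the adaptive
    block-averaging stage of an online reconstruction pipeline."""

    def __init__(self, block_size):
        self.block_size = block_size
        self._buf = []

    def push(self, frame):
        self._buf.append(np.asarray(frame, dtype=float))
        if len(self._buf) == self.block_size:
            out = np.mean(self._buf, axis=0)  # averaged frame, better SNR
            self._buf.clear()
            return out                         # ready for reconstruction
        return None                            # still accumulating

avg = BlockAverager(block_size=4)
outputs = [avg.push(np.full(3, k)) for k in range(8)]  # 8 incoming frames
```

Averaging B frames improves SNR by roughly sqrt(B) at the cost of B frame periods of added latency, which is the trade-off the adaptive scheme tunes.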
Exponential filtering of singular values improves photoacoustic image reconstruction.
Bhatt, Manish; Gutta, Sreedevi; Yalavarthy, Phaneendra K
2016-09-01
Model-based image reconstruction techniques yield better quantitative accuracy in photoacoustic image reconstruction. In this work, an exponential filtering of singular values is proposed for carrying out the image reconstruction in photoacoustic tomography. The results were compared with the widely used Tikhonov regularization, time reversal, and state-of-the-art least-squares QR-based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of data. It was shown that exponential filtering provides superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less biased toward the regularization parameter and did not incur any additional computational burden, as it was implemented within the Tikhonov filtering framework. It was also shown that standard Tikhonov filtering becomes an approximation to the proposed exponential filtering. PMID:27607501
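Filtered-SVD reconstruction makes the comparison concrete: both Tikhonov and exponential filtering damp the contribution of small singular values, differing only in the filter factor applied to each component. The exponential filter form below, 1 - exp(-s^2/alpha), is a common choice and an assumption of this sketch; the paper's exact parameterization may differ.

```python
import numpy as np

def filtered_solve(A, b, alpha=1e-2, kind="exp"):
    """Regularized least squares via filtered SVD.

    kind="exp": filter factor 1 - exp(-s^2/alpha)   (assumed form)
    kind="tik": Tikhonov factor  s^2 / (s^2 + alpha)
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if kind == "exp":
        f = 1.0 - np.exp(-s**2 / alpha)
    else:
        f = s**2 / (s**2 + alpha)
    coeffs = f * (U.T @ b) / s       # filtered spectral coefficients
    return Vt.T @ coeffs

A = np.array([[3.0, 0.00],
              [0.0, 0.01]])          # ill-conditioned toy system
b = A @ np.array([1.0, 1.0])
x_exp = filtered_solve(A, b, kind="exp")
x_tik = filtered_solve(A, b, kind="tik")
```

For large singular values the exponential factor approaches 1 faster than the Tikhonov factor, so well-conditioned components are biased less, while the badly conditioned component is suppressed by both filters.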
Reconstruction algorithms for optoacoustic imaging based on fiber optic detectors
NASA Astrophysics Data System (ADS)
Lamela, Horacio; Díaz-Tendero, Gonzalo; Gutiérrez, Rebeca; Gallego, Daniel
2011-06-01
Optoacoustic Imaging (OAI), a novel hybrid imaging technology, offers high contrast, molecular specificity and excellent resolution to overcome limitations of the current clinical modalities for detection of solid tumors. The exact time-domain reconstruction formula produces images with excellent resolution but poor contrast. Some approximate time-domain filtered back-projection reconstruction algorithms have also been reported to solve this problem. Filtering implemented via a wavelet transform can be used to sharpen object boundaries while simultaneously preserving high contrast of the reconstructed objects. In this paper, several algorithms based on Back Projection (BP) techniques have been suggested to process OA images in conjunction with signal filtering for ultrasonic point detectors and integral detectors. We apply these techniques first directly to a numerically generated sample image and then to the laser-digitized image of a tissue phantom, obtaining in both cases the best results in resolution and contrast with a wavelet-based filter.
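A minimal delay-and-sum back projection for ideal point detectors, the simplest member of the BP family discussed above, can be sketched as follows; the geometry, sampling rate, and sound speed are illustrative, and no filtering step is included.

```python
import numpy as np

def delay_and_sum(signals, det_pos, grid, fs, c=1500.0):
    """Naive back projection: for each image point, sum each detector's
    trace at the time-of-flight from that point to the detector.

    signals: (n_det, n_t) traces, det_pos: (n_det, 2) positions [m],
    grid: (n_pix, 2) image points [m], fs: sampling rate [Hz]."""
    n_det, n_t = signals.shape
    image = np.zeros(len(grid))
    for d in range(n_det):
        dist = np.linalg.norm(grid - det_pos[d], axis=1)
        idx = np.round(dist / c * fs).astype(int)   # time-of-flight sample
        valid = idx < n_t
        image[valid] += signals[d, idx[valid]]
    return image / n_det

# toy example: a point source at the origin, three detectors on a 10 mm arc
fs, c = 1e6, 1500.0
angles = np.array([0.0, 2.0, 4.0])
det_pos = 0.01 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
n_t = 200
signals = np.zeros((3, n_t))
t_arr = int(round(0.01 / c * fs))       # arrival sample for the origin
signals[:, t_arr] = 1.0
grid = np.array([[0.0, 0.0], [0.005, 0.005]])
img = delay_and_sum(signals, det_pos, grid, fs)
```

All detector traces add coherently at the true source location and only partially elsewhere, which is the resolution/contrast behavior the filtered variants then improve on.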
Reconstructing the shape of an object from its mirror image
NASA Astrophysics Data System (ADS)
Hutt, T.; Simonetti, F.
2010-09-01
An image of an object can be achieved by sending multiple waves toward it and recording the reflections. In order to achieve a complete reconstruction it is usually necessary to send and receive waves from every possible direction [360° for two-dimensional (2D) imaging]. In practice this is often not possible and imaging must be performed with a limited view, which degrades the reconstruction. A proposed solution is to use a strongly scattering planar interface as a mirror to "look behind" the object. The mirror provides additional views that result in an improved reconstruction. We describe this technique and how it is implemented in the context of 2D acoustic imaging. The effect of the mirror on imaging is demonstrated by means of numerical examples that are also used to study the effects of noise. This technique could be used with many imaging methods and wave types, including microwaves, ultrasound, sonar, and seismic waves.
Three-dimensional surface reconstruction from multistatic SAR images.
Rigling, Brian D; Moses, Randolph L
2005-08-01
This paper discusses reconstruction of three-dimensional surfaces from multiple bistatic synthetic aperture radar (SAR) images. Techniques for surface reconstruction from multiple monostatic SAR images already exist, including interferometric processing and stereo SAR. We generalize these methods to obtain algorithms for bistatic interferometric SAR and bistatic stereo SAR. We also propose a framework for predicting the performance of our multistatic stereo SAR algorithm, and, from this framework, we suggest a metric for use in planning strategic deployment of multistatic assets. PMID:16121463
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
The speckle image reconstruction of the solar small scale features
NASA Astrophysics Data System (ADS)
Zhong, Libo; Tian, Yu; Rao, Changhui
2014-11-01
The resolution of astronomical objects observed by earth-based telescopes is limited by atmospheric turbulence. Speckle image reconstruction provides access to small-scale solar features near the diffraction limit of the telescope. This paper describes the implementation of the reconstruction of images obtained by the 1-m New Vacuum Solar Telescope at the Fuxian Solar Observatory. The speckle masking method is used to reconstruct the Fourier phases for its better dynamic range and resolution capabilities. Besides the phase reconstruction, several problems encountered in solar image reconstruction are discussed. The details of the implementation, including flat-fielding, image segmentation, Fried-parameter estimation and noise-filter estimation, are described in particular. It is demonstrated that speckle image reconstruction is effective in restoring wide field-of-view images. The quality of the restorations is evaluated by the contrast ratio. When the Fried parameter is 10 cm, the contrast ratio of the sunspot and granulation can be improved from 0.3916 to 0.6845 and from 0.0248 to 0.0756, respectively.
Image Alignment for Tomography Reconstruction from Synchrotron X-Ray Microscopic Images
Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai
2014-01-01
A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the “projected feature points” in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx. PMID:24416264
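The second alignment step, fitting sine-shaped loci, reduces to linear least squares once the locus is written as r(θ) = a·sin θ + b·cos θ + c. A sketch of that step (variable names are illustrative, not from the authors' software):

```python
import numpy as np

def fit_sine_locus(theta, r):
    """Fit r(theta) = a*sin(theta) + b*cos(theta) + c by linear least squares.

    (a, b) encode the feature's in-plane position; c absorbs a constant
    offset. Residuals from the fit indicate per-image misalignment."""
    M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coef, *_ = np.linalg.lstsq(M, r, rcond=None)
    return coef

# synthetic locus of one tracked feature point over 40 rotation angles
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
r = 3.0 * np.sin(theta) - 1.5 * np.cos(theta) + 0.2
a, b, c = fit_sine_locus(theta, r)
```

In the alignment application, the difference between each measured point and the fitted sine wave gives the shift needed to correct that projection for holder vibration.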
Ren, Maodong; Liang, Jin; Li, Leigang; Wei, Bin; Wang, Lizhong; Tang, Zhengzong
2015-07-01
Based on a stereomicroscope and the three-dimensional (3D) digital image correlation (DIC) method, a non-contact measurement technique is presented to measure 3D shape and deformation data on miniature specimens, and the corresponding microscopic measurement system is developed. A pair of cameras is mounted on a binocular stereo light microscope to acquire paired micrographs, from two different optical paths, of a specimen surface sprayed with a speckle pattern. Considering the complex optical paths and high magnification, an accurate equivalent relative calibration method, combining a priori warping functions, is proposed to correct image distortions and optimize the intrinsic and extrinsic parameters of the stereomicroscope. Then, a fast one-dimensional synchronous stereo matching method, based on the DIC method and an image rectification technique, is proposed to search for discontinuous corresponding points in the paired micrographs. Finally, the 3D shape is reconstructed from the corresponding points, while the temporal micrographs acquired before and after deformation are employed to determine the full-field deformation. The effectiveness and accuracy of the presented microscale measurement technique are verified by a series of experiments. PMID:26233412
NASA Astrophysics Data System (ADS)
Park, Sang Joon; Kim, Tae Jung; Kim, Kwang Gi; Lee, Sang Ho; Goo, Jin Mo; Kim, Jong Hyo
2008-03-01
Airway wall thickness (AWT) is an important biomarker for the evaluation of pulmonary diseases such as chronic bronchitis and bronchiectasis. While image-based analysis of the airway tree can provide precise and valuable airway size information, quantitative measurement of AWT in Multidetector-Row Computed Tomography (MDCT) images involves various sources of error and uncertainty. We have therefore developed an accurate AWT measurement technique for small airways with a three-dimensional (3-D) approach. To evaluate the performance of the technique, we used an acrylic tube phantom made to mimic small airways, with three different wall diameters (4.20, 1.79, 1.24 mm) and wall thicknesses (1.84, 1.22, 0.67 mm). The phantom was imaged with MDCT using a standard reconstruction kernel (Sensation 16, Siemens, Erlangen). The pixel size was 0.488 mm × 0.488 mm × 0.75 mm in the x, y, and z directions, respectively. The images were magnified 5 times using cubic B-spline interpolation, and line profiles were obtained for each tube. To recover a faithful line profile from the blurred images, the line profiles were deconvolved with a point spread kernel of the MDCT, which was estimated using the ideal tube profile and the image line profile. The inner diameter, outer diameter, and wall thickness of each tube were obtained with the full-width-half-maximum (FWHM) method for the line profiles before and after deconvolution processing. Results show that significant improvement was achieved over the conventional FWHM method in the measurement of AWT.
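The FWHM measurement on a line profile can be sketched with sub-pixel linear interpolation at the half-maximum crossings; this is a generic implementation for a single-peaked profile, not the authors' code.

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a single-peaked line profile,
    with linear interpolation at the two half-maximum crossings."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right crossing positions (in samples)
    left = i0 - (y[i0] - half) / (y[i0] - y[i0 - 1]) if i0 > 0 else i0
    right = i1 + (y[i1] - half) / (y[i1] - y[i1 + 1]) if i1 < len(y) - 1 else i1
    return (right - left) * dx

x = np.linspace(-5, 5, 201)
g = np.exp(-x**2 / 2.0)            # Gaussian profile, sigma = 1
w = fwhm(g, dx=x[1] - x[0])
```

A handy sanity check: for a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma. On blurred edges the plain FWHM method is biased, which is why the abstract pairs it with deconvolution of the point spread kernel.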
A novel building boundary reconstruction method based on lidar data and images
NASA Astrophysics Data System (ADS)
Chen, Yiming; Zhang, Wuming; Zhou, Guoqing; Yan, Guangjian
2013-09-01
Building boundaries are important for urban mapping and real estate industry applications. The reconstruction of building boundaries is also a significant but difficult step in generating city building models. As light detection and ranging (lidar) systems can acquire large and dense point cloud data quickly and easily, they have great advantages for building reconstruction. In this paper, we combine lidar data and images to develop a novel building boundary reconstruction method. We use only one scan of lidar data and one image to do the reconstruction. The process consists of a sequence of three steps: project boundary lidar points to the image; extract an accurate boundary from the image; and reconstruct the boundary in the lidar points. We define a relationship between 3D points and pixel coordinates. We then extract the boundary in the image and use the relationship to obtain the boundary in the point cloud. The method presented here reduces the difficulty of data acquisition effectively. It is conceptually simple and has low computational complexity. It can also be widely applied to data acquired by other 3D scanning devices to improve accuracy. Results of the experiment demonstrate that this method has a clear advantage and high efficiency over others, particularly for data with large point spacing.
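The "relationship between 3D points and pixel coordinates" can be sketched with a standard pinhole camera model; the intrinsic matrix and pose below are illustrative assumptions, since the abstract does not specify the calibration.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project 3-D points into pixel coordinates with a pinhole model:
    p ~ K (R X + t). Returns an (n, 2) array of pixel coordinates."""
    cam = points @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                 # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3] # perspective division

K = np.array([[1000.0, 0.0, 320.0],     # assumed focal length / principal point
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)           # camera aligned with the world frame
pts = np.array([[0.0, 0.0, 5.0],        # point on the optical axis
                [0.5, -0.25, 5.0]])
px = project_points(pts, K, R, t)
```

Inverting this mapping along the image boundary, constrained to the lidar points, is what lets the image-extracted boundary refine the coarser point-cloud boundary.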
Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging
Sheen, David M.; Hall, Thomas E.
2014-06-09
Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Improving lesion detectability in PET imaging with a penalized likelihood reconstruction algorithm
NASA Astrophysics Data System (ADS)
Wangerin, Kristen A.; Ahn, Sangtae; Ross, Steven G.; Kinahan, Paul E.; Manjeshwar, Ravindra M.
2015-03-01
Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM, and importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality as previously shown but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both non-prewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.
NASA Astrophysics Data System (ADS)
Wang, A. S.; Stayman, J. W.; Otake, Y.; Khanna, A. J.; Gallia, G. L.; Siewerdsen, J. H.
2014-03-01
Purpose: A new method for accurately portraying the impact of low-dose imaging techniques in C-arm cone-beam CT (CBCT) is presented and validated, allowing identification of minimum-dose protocols suitable to a given imaging task on a patient-specific basis in scenarios that require repeat intraoperative scans. Method: To accurately simulate lower-dose techniques and account for object-dependent noise levels (x-ray quantum noise and detector electronics noise) and correlations (detector blur), noise of the proper magnitude and correlation was injected into the projections from an initial CBCT acquired at the beginning of a procedure. The resulting noisy projections were then reconstructed to yield low-dose preview (LDP) images that accurately depict the image quality at any level of reduced dose in both filtered backprojection and statistical image reconstruction. Validation studies were conducted on a mobile C-arm, with the noise injection method applied to images of an anthropomorphic head phantom and cadaveric torso across a range of lower-dose techniques. Results: Comparison of preview and real CBCT images across a full range of techniques demonstrated accurate noise magnitude (within ~5%) and correlation (matching noise-power spectrum, NPS). Other image quality characteristics (e.g., spatial resolution, contrast, and artifacts associated with beam hardening and scatter) were also realistically presented at all levels of dose and across reconstruction methods, including statistical reconstruction. Conclusion: Generating low-dose preview images for a broad range of protocols gives a useful method to select minimum-dose techniques that accounts for complex factors of imaging task, patient-specific anatomy, and observer preference. The ability to accurately simulate the influence of low-dose acquisition in statistical reconstruction provides an especially valuable means of identifying low-dose limits in a manner that does not rely on a model for the nonlinear
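The core of the low-dose preview idea is injecting noise of the proper magnitude into full-dose projections. The sketch below handles only the uncorrelated quantum-noise part under an assumed Poisson model (variance equals mean in detected-quanta units); the paper's detector-blur correlation and electronics-noise terms are omitted, and the function name is ours.

```python
import numpy as np

def inject_dose_noise(proj, dose_fraction, rng=None):
    """Add zero-mean Gaussian noise so a full-dose projection mimics one
    acquired at `dose_fraction` of the dose. Assumes proj is in
    detected-quanta units (Poisson: variance == mean); correlated noise
    and electronic noise from the paper's model are not included."""
    rng = np.random.default_rng(rng)
    extra_var = proj * (1.0 / dose_fraction - 1.0)  # variance to add
    return proj + rng.normal(0.0, np.sqrt(np.maximum(extra_var, 0.0)))

low_dose = inject_dose_noise(np.full(8, 1000.0), dose_fraction=0.25, rng=0)
```

Reconstructing such noisy projections then previews image quality at the reduced dose without re-irradiating the patient.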
Fuzzy-rule-based image reconstruction for positron emission tomography
NASA Astrophysics Data System (ADS)
Mondal, Partha P.; Rajan, K.
2005-09-01
Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
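The two-step structure (edge detection in the nearest neighborhood, then smoothing only where no edge is found) can be sketched with a crisp threshold standing in for the paper's fuzzy-rule derivatives; the threshold value and function name are ours.

```python
import numpy as np

def edge_gated_smooth(img, edge_thresh=0.5):
    """One smoothing pass that skips pixels near a detected 'edge': a
    pixel moves toward its 4-neighbor mean only if the largest absolute
    neighbor difference stays below edge_thresh (crisp stand-in for the
    fuzzy rules)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            nbrs = np.array([img[i-1, j], img[i+1, j],
                             img[i, j-1], img[i, j+1]])
            if np.max(np.abs(nbrs - img[i, j])) < edge_thresh:
                out[i, j] = nbrs.mean()     # penalize only non-edge pixels
    return out
```

Iterating this pass suppresses noise in homogeneous density classes while leaving step edges between classes untouched.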
Sparse-Coding-Based Computed Tomography Image Reconstruction
Yoon, Gang-Joon
2013-01-01
Computed tomography (CT) is a popular type of medical imaging that generates images of the internal structure of an object based on projection scans of the object from several angles. There are numerous methods to reconstruct the original shape of the target object from scans, but they are still dependent on the number of angles and iterations. To overcome the drawbacks of iterative reconstruction approaches such as the algebraic reconstruction technique (ART), while remaining robust to small amounts of random noise (ℓ2-norm error) and corrupted projection scans (ℓ1-norm error), we propose a medical image reconstruction methodology based on sparse coding, a powerful matrix factorization method in which each pixel is represented as a linear combination of a small number of basis vectors. PMID:23576898
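The sparse-coding idea, representing a signal as a combination of a few dictionary atoms, can be sketched with greedy matching pursuit (a standard algorithm, not necessarily the one used in the paper; the dictionary here is a toy identity basis).

```python
import numpy as np

def matching_pursuit(D, y, n_atoms=2):
    """Greedy sparse coding: represent y as a linear combination of a
    small number of columns (atoms) of dictionary D (unit-norm atoms)."""
    r, x = y.astype(float).copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        k = np.argmax(np.abs(D.T @ r))      # best-matching atom
        c = D[:, k] @ r                     # its coefficient
        x[k] += c
        r -= c * D[:, k]                    # remove explained part
    return x

D = np.eye(4)                               # trivial orthonormal dictionary
code = matching_pursuit(D, np.array([0., 3., 0., 1.]))
```

With an orthonormal dictionary the greedy pass recovers an exactly 2-sparse signal; realistic overcomplete dictionaries only approximate this.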
Matrix-based image reconstruction methods for tomography
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
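A maximum-likelihood EM iteration with an explicit system matrix, which needs no matrix inversion, can be sketched as follows (standard ML-EM update on a toy 3-detector, 2-pixel system; the specific matrices are hypothetical).

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Maximum-likelihood EM reconstruction with an explicit system
    matrix A (detectors x pixels); multiplicative updates, no inversion."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                    # sensitivity image (assumed > 0)
    for _ in range(n_iter):
        proj = A @ x                        # forward project
        ratio = np.where(proj > 0, counts / proj, 0.0)
        x *= (A.T @ ratio) / sens           # backproject and update
    return x

A = np.array([[1., 0.], [0., 1.], [1., 1.]])   # toy system matrix
recon = mlem(A, A @ np.array([2., 5.]), n_iter=200)
```

On noiseless, consistent data the iteration converges to the true activity; with real counts it converges to the ML estimate instead.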
Bayesian 2D Current Reconstruction from Magnetic Images
NASA Astrophysics Data System (ADS)
Clement, Colin B.; Bierbaum, Matthew K.; Nowack, Katja; Sethna, James P.
We employ a Bayesian image reconstruction scheme to recover 2D currents from magnetic flux imaged with scanning SQUIDs (Superconducting Quantum Interferometric Devices). Magnetic flux imaging is a versatile tool to locally probe currents and magnetic moments; however, present reconstruction methods sacrifice resolution due to numerical instability. Using state-of-the-art blind deconvolution techniques we recover the currents, point-spread function and height of the SQUID loop by optimizing the probability of measuring an image. We obtain uncertainties on these quantities by sampling reconstructions. This generative modeling technique could be used to develop calibration protocols for scanning SQUIDs, to diagnose systematic noise in the imaging process, and can be applied to many tools beyond scanning SQUIDs.
Compensation for air voids in photoacoustic computed tomography image reconstruction
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Li, Lei; Wang, Lihong V.; Anastasio, Mark A.
2016-03-01
Most image reconstruction methods in photoacoustic computed tomography (PACT) assume that the acoustic properties of the object and the surrounding medium are homogeneous. This can lead to strong artifacts in the reconstructed images when there are significant variations in sound speed or density. Air voids represent a particular challenge due to the severity of the differences between the acoustic properties of air and water. In whole-body small animal imaging, the presence of air voids in the lungs, stomach, and gastrointestinal system can limit image quality over large regions of the object. Iterative reconstruction methods based on the photoacoustic wave equation can account for these acoustic variations, leading to improved resolution, improved contrast, and a reduction in the number of imaging artifacts. However, the strong acoustic heterogeneities can lead to instability or errors in the numerical wave solver. Here, the impact of air voids on PACT image reconstruction is investigated, and procedures for their compensation are proposed. The contributions of sound speed and density variations to the numerical stability of the wave solver are considered, and a novel approach for mitigating the impact of air voids while reducing the computational burden of image reconstruction is identified. These results are verified by application to an experimental phantom.
Electrophysiology Catheter Detection and Reconstruction From Two Views in Fluoroscopic Images.
Hoffmann, Matthias; Brost, Alexander; Koch, Martin; Bourier, Felix; Maier, Andreas; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim
2016-02-01
Electrophysiology (EP) studies and catheter ablation have become important treatment options for several types of cardiac arrhythmias. We present a novel image-based approach for automatic detection and 3-D reconstruction of EP catheters where the physician marks the catheter to be reconstructed by a single click in each image. The result can be used to provide 3-D information for enhanced navigation throughout EP procedures. Our approach involves two X-ray projections acquired from different angles, and it is based on two steps: First, we detect the catheter in each view after manual initialization using a graph-search method. Then, the detection results are used to reconstruct a full 3-D model of the catheter based on automatically determined point pairs for triangulation. An evaluation on 176 different clinical fluoroscopic images yielded a detection rate of 83.4%. For measuring the error, we used the coupling distance which is a more accurate quality measure than the average point-wise distance to a reference. For successful outcomes, the 2-D detection error was 1.7 mm ±1.2 mm. Using successfully detected catheters for reconstruction, we obtained a reconstruction error of 1.8 mm ±1.1 mm on phantom data. On clinical data, our method yielded a reconstruction error of 2.2 mm ±2.2 mm. PMID:26441411
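The reconstruction step, recovering a 3-D point from corresponding detections in two X-ray views, is classically done by linear (DLT) triangulation. The sketch below shows that step only; the graph-search catheter detection is omitted, and the projection matrices in the usage example are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3-D point from its pixel
    coordinates u1, u2 in two views with 3x4 projection matrices P1, P2."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)             # null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                     # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])          # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])  # translated view
point = triangulate(P1, P2, np.array([0.2, 0.4]), np.array([0.0, 0.4]))
```

Triangulating the automatically paired points along both detected curves yields the full 3-D catheter model.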
Hansen, Hendrik H G; Richards, Michael S; Doyley, Marvin M; de Korte, Chris L
2013-01-01
Atherosclerotic plaque rupture can initiate stroke or myocardial infarction. Lipid-rich plaques with thin fibrous caps have a higher risk to rupture than fibrotic plaques. Elastic moduli differ for lipid-rich and fibrous tissue and can be reconstructed using tissue displacements estimated from intravascular ultrasound radiofrequency (RF) data acquisitions. This study investigated if modulus reconstruction is possible for noninvasive RF acquisitions of vessels in transverse imaging planes using an iterative 2D cross-correlation based displacement estimation algorithm. Furthermore, since it is known that displacements can be improved by compounding of displacements estimated at various beam steering angles, we compared the performance of the modulus reconstruction with and without compounding. For the comparison, simulated and experimental RF data were generated of various vessel-mimicking phantoms. Reconstruction errors were less than 10%, which seems adequate for distinguishing lipid-rich from fibrous tissue. Compounding outperformed single-angle reconstruction: the interquartile range of the reconstructed moduli for the various homogeneous phantom layers was approximately two times smaller. Additionally, the estimated lateral displacements were a factor of 2-3 better matched to the displacements corresponding to the reconstructed modulus distribution. Thus, noninvasive elastic modulus reconstruction is possible for transverse vessel cross sections using this cross-correlation method and is more accurate with compounding. PMID:23478602
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response would require separation of these processes because they define radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell-population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model which enables development of more
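The ill-posedness of separating near-degenerate exponentials, and the stabilizing effect of regularization, can be shown with plain Tikhonov-regularized least squares (a stand-in for the paper's simulated annealing with variational regularization; decay rates and data here are synthetic).

```python
import numpy as np

def regularized_lstsq(A, y, lam):
    """Tikhonov-regularized least squares:
    argmin ||A x - y||^2 + lam * ||x||^2.
    Stabilizes ill-posed fits such as separating two exponential
    components with nearly identical decay rates."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

t = np.linspace(0, 10, 50)
# Two nearly collinear exponential basis functions -> ill-conditioned fit.
A = np.column_stack([np.exp(-0.30 * t), np.exp(-0.31 * t)])
y = A @ np.array([1.0, 1.0])
x_reg = regularized_lstsq(A, y, 1e-6)       # bounded, data-consistent fit
```

With noisy data the unregularized solution fluctuates wildly between the two components, mirroring the nonphysical parameter fluctuations reported in the abstract.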
Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity
Du, Huiqian; Han, Yu; Mei, Wenbo
2014-01-01
Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. The reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in wavelet transform and gradient domains. The idea is attractive because it requires neither the estimation of the contrast changes nor multiple rounds of motion compensation. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). Fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease sampling ratio or improve the reconstruction quality alternatively. PMID:25371704
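The workhorse inside splitting algorithms such as FCSA is the elementwise soft-thresholding (shrinkage) step, the proximal operator of the l1 norm applied to sparse transform coefficients:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1: shrink each coefficient toward
    zero by tau, setting small coefficients exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

coeffs = np.array([-2.0, 0.5, 3.0])
shrunk = soft_threshold(coeffs, 1.0)        # -> [-1., 0., 2.]
```

In CS-MRI this is applied to wavelet (or TGV auxiliary) coefficients between data-consistency gradient steps.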
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed by using the GATE MC package and reconstructed images obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed by using fixed subsets and various iterations, as well as by using various beta (hyper) parameter values. MTF values were found to increase with increasing iterations. MTF also improves by using lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
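Estimating an MTF curve from a reconstructed plane-source image typically reduces to Fourier-transforming a line spread function (LSF) profile; a minimal sketch with a synthetic Gaussian LSF (the paper's exact estimation pipeline may differ):

```python
import numpy as np

def mtf_from_lsf(lsf):
    """Modulation transfer function as the normalized magnitude of the
    Fourier transform of a line spread function profile."""
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]                         # normalize to 1 at zero frequency

lsf = np.exp(-0.5 * (np.arange(-32, 32) / 3.0) ** 2)  # synthetic Gaussian LSF
mtf = mtf_from_lsf(lsf)                     # monotone roll-off with frequency
```

A sharper reconstruction (e.g., more iterations, lower beta) yields a narrower LSF and hence higher MTF values at each spatial frequency.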
Constrained TV-minimization image reconstruction for industrial CT system
NASA Astrophysics Data System (ADS)
Chen, Buxin; Yang, Min; Zhang, Zheng; Bian, Junguo; Han, Xiao; Sidky, Emil; Pan, Xiaochuan
2014-02-01
In this work, we investigate the applicability of the constrained total-variation (TV)-minimization reconstruction method to industrial CT systems. In general, industrial CT systems share the same imaging principles with clinical CT systems, but have different imaging objectives and evaluation metrics. Optimization-based image reconstruction methods have been actively developed to meet practical challenges and extensively tested for clinical CT systems. However, the utility of optimization-based reconstruction methods is task-specific and not necessarily transferable among different tasks. In this work, we adopt constrained TV-minimization programs together with the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) algorithm for reconstructing images from data of a concrete sample collected using a laboratory industrial CT system developed for non-destructive evaluation. Our results, compared to those reconstructed from an FBP-based algorithm, suggest that the constrained TV-minimization program combined with the ASD-POCS algorithm can yield images with comparable or improved visual quality and achieve equivalent or better imaging objectives over the currently used FBP-based algorithm under dense sampling data condition.
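The steepest-descent half of ASD-POCS repeatedly descends a (sub)gradient of the image TV; a 1-D sketch of that ingredient (the full algorithm alternates this with data-consistency projections, which are not shown):

```python
import numpy as np

def tv_and_gradient(x):
    """1-D total variation and a subgradient, the descent direction used
    in the TV-minimization half of ASD-POCS-style algorithms."""
    d = np.diff(x)
    tv = np.abs(d).sum()
    s = np.sign(d)
    g = np.zeros_like(x)
    g[:-1] -= s                             # each |x[i+1]-x[i]| term
    g[1:] += s                              # contributes to two pixels
    return tv, g

noisy = np.array([0., 2., 0.])              # an isolated spike: high TV
tv, g = tv_and_gradient(noisy)
smoothed = noisy - 0.5 * g                  # one descent step lowers TV
```

The adaptive part of ASD-POCS balances the TV step size against how far the data-consistency projection moved the image.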
Lin, Yu; Liao, Ning-fang; Luo, Yong-dao; Cui, De-qi; Tan, Bo-neng; Wu, Wen-min
2010-08-01
In the present paper, we introduce our research on spectral reconstruction for a Fourier transform computed tomography imaging spectrometer by means of the algebraic reconstruction technique (ART). A simulation experiment was carried out to demonstrate the algorithm. The spatial similarities and spectral similarities were evaluated using the normalized correlation coefficient. The performance of ART was evaluated when the number of projections is 45; in that case, filtered back-projection does not work well. Actual spectral slices were reconstructed by using ART in the last part of this paper. PMID:20939312
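ART is the cyclic Kaczmarz method: each projection equation defines a hyperplane, and the estimate is projected onto them in turn. A minimal sketch on a toy consistent system (the matrix here is illustrative, not a spectrometer model):

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique: cyclic Kaczmarz projections
    onto the hyperplane of each projection equation a @ x = bi."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x

A = np.array([[1., 1.], [1., -1.], [2., 1.]])   # toy projection matrix
recon = art(A, A @ np.array([3., 1.]))
```

Unlike filtered back-projection, ART degrades gracefully with few projection angles, which is why it remains usable at 45 projections.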
SPECT data acquisition and image reconstruction in a stationary small animal SPECT/MRI system
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Chen, Si; Yu, Jianhua; Meier, Dirk; Wagenaar, Douglas J.; Patt, Bradley E.; Tsui, Benjamin M. W.
2010-04-01
The goal of the study was to investigate data acquisition strategies and image reconstruction methods for a stationary SPECT insert that can operate inside an MRI scanner with a 12 cm bore diameter for simultaneous SPECT/MRI imaging of small animals. The SPECT insert consists of 3 octagonal rings of 8 MR-compatible CZT detectors per ring surrounding a multi-pinhole (MPH) collimator sleeve. Each pinhole is constructed to project the field-of-view (FOV) to one CZT detector. All 24 pinholes are focused to a cylindrical FOV of 25 mm in diameter and 34 mm in length. The data acquisition strategies we evaluated were optional collimator rotations to improve tomographic sampling; and the image reconstruction methods were iterative ML-EM with and without compensation for the geometric response function (GRF) of the MPH collimator. For this purpose, we developed an analytic simulator that calculates the system matrix with the GRF models of the MPH collimator. The simulator was used to generate projection data of a digital rod phantom with pinhole aperture sizes of 1 mm and 2 mm and with different collimator rotation patterns. Iterative ML-EM reconstruction with and without GRF compensation was used to reconstruct the projection data from the central ring of 8 detectors only, and from all 24 detectors. Our results indicated that without GRF compensation and at the default design of 24 projection views, the reconstructed images had significant artifacts. Accurate GRF compensation substantially improved the reconstructed image resolution and reduced image artifacts. With accurate GRF compensation, useful reconstructed images can be obtained using 24 projection views only. This last finding potentially enables dynamic SPECT (and/or MRI) studies in small animals, one of many possible application areas of the SPECT/MRI system. Further research efforts are warranted including experimentally measuring the system matrix for improved geometrical accuracy, incorporating the co
Fast dictionary-based reconstruction for diffusion spectrum imaging.
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2013-11-01
Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466
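The second proposed method, reconstruction via a Tikhonov-regularized pseudoinverse with respect to a dictionary, has an analytical solution that can be precomputed once and applied per voxel with a single matrix multiply. A sketch with a random stand-in dictionary (in the paper the atoms would be training pdfs or K-SVD atoms):

```python
import numpy as np

def make_tikhonov_reconstructor(D, lam=1e-3):
    """Precompute the Tikhonov-regularized pseudoinverse of dictionary D
    so each undersampled measurement y is reconstructed as
    D @ (D^T D + lam I)^{-1} D^T y, one matmul per voxel."""
    P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)
    return lambda y: D @ (P @ y)

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 4))            # stand-in dictionary atoms
recon = make_tikhonov_reconstructor(D, lam=1e-6)
restored = recon(D @ np.array([1., -2., 0.5, 0.]))
```

Precomputing `P` is what makes this two orders of magnitude faster than iterative dictionary-based CS solvers.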
Probe and object function reconstruction in incoherent stem imaging
Nellist, P.D.; Pennycook, S.J.
1996-09-01
Using the phase-object approximation it is shown how an annular dark-field (ADF) detector in a scanning transmission electron microscope (STEM) leads to an image which can be described by an incoherent model. The point spread function is found to be simply the illuminating probe intensity. An important consequence of this is that there is no phase problem in the imaging process, which allows various image processing methods to be applied directly to the image intensity data. Using an image of GaAs<110>, the probe intensity profile is reconstructed, confirming the existence of a 1.3 Å probe in a 300 kV STEM. It is shown that simply deconvolving this reconstructed probe from the image data does not improve its interpretability because the dominant effects of the imaging process arise simply from the restricted resolution of the microscope. However, use of the reconstructed probe in a maximum entropy reconstruction is demonstrated, which allows information beyond the resolution limit to be restored and does allow improved image interpretation.
Cervigram image segmentation based on reconstructive sparse representations
NASA Astrophysics Data System (ADS)
Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris
2010-03-01
We propose an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparse constraints and become more separable. The K-SVD algorithm is employed to find sparse representations and corresponding dictionaries. The data can be reconstructed from its sparse representations and positive and/or negative dictionaries. Classification can be achieved based on comparing the reconstructive errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.
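The classification rule, assign a sample to the class whose dictionary reconstructs it with the smallest error, can be sketched with plain least squares standing in for the paper's K-SVD sparse coding; the toy dictionaries below are illustrative only.

```python
import numpy as np

def classify_by_residual(y, dicts):
    """Assign y to the class whose dictionary reconstructs it with the
    smallest least-squares residual (least squares stands in here for
    sparse coding over positive/negative dictionaries)."""
    errs = []
    for D in dicts:
        c, *_ = np.linalg.lstsq(D, y, rcond=None)
        errs.append(np.linalg.norm(D @ c - y))
    return int(np.argmin(errs))

D0 = np.array([[1., 0.], [0., 1.], [0., 0.]])   # toy class-0 atoms
D1 = np.array([[0.], [0.], [1.]])               # toy class-1 atom
label = classify_by_residual(np.array([1., 2., 0.]), [D0, D1])
```

Per-pixel feature vectors classified this way yield the AW segmentation mask.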
Reconstruction of indoor scene from a single image
NASA Astrophysics Data System (ADS)
Wu, Di; Li, Hongyu; Zhang, Lin
2015-03-01
Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover the 3D modelling of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and the plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With Phase Congruency and the Harris Detector, the line segments can be detected exactly, which is a prerequisite. Perspective transform is defined as a reliable method whereby the points on the image can be represented on a 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image although the available depth information is limited.
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
Improving JWST Coronagraphic Performance with Accurate Image Registration
NASA Astrophysics Data System (ADS)
Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group
2016-06-01
The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
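A standard first stage for the registration algorithms discussed here is phase correlation, which recovers the translation between two images from the phase of their cross-power spectrum; subpixel refinement (e.g., upsampled correlation) would follow. A 1-D sketch with synthetic data (not any specific JWST pipeline algorithm):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer-pixel translation estimate between two 1-D signals via
    phase correlation (normalized cross-power spectrum); the first stage
    before subpixel refinement."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    r = A * np.conj(B)
    r /= np.abs(r) + 1e-12                  # keep only the phase
    corr = np.fft.ifft(r).real              # delta at the true shift
    k = int(np.argmax(corr))
    n = len(a)
    return k if k <= n // 2 else k - n      # wrap to a signed shift

ref = np.zeros(64); ref[20:25] = 1.0
sci = np.roll(ref, 3)                       # science frame shifted by +3
shift = phase_correlation_shift(sci, ref)
```

Once the shift is estimated, the reference PSF is resampled onto the science grid before subtraction and post-processing (KLIP, LOCI).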
Reconstruction of a cone-beam CT image via forward iterative projection matching
Brock, R. Scott; Docef, Alen; Murphy, Martin J.
2010-12-15
Purpose: To demonstrate the feasibility of reconstructing a cone-beam CT (CBCT) image by deformably altering a prior fan-beam CT (FBCT) image such that it matches the anatomy portrayed in the CBCT projection data set. Methods: A prior FBCT image of the patient is assumed to be available as a source image. A CBCT projection data set is obtained and used as a target image set. A parametrized deformation model is applied to the source FBCT image, digitally reconstructed radiographs (DRRs) that emulate the CBCT projection image geometry are calculated and compared to the target CBCT projection data, and the deformation model parameters are adjusted iteratively until the DRRs optimally match the CBCT projection data set. The resulting deformed FBCT image is hypothesized to be an accurate representation of the patient's anatomy imaged by the CBCT system. The process is demonstrated via numerical simulation. A known deformation is applied to a prior FBCT image and used to create a synthetic set of CBCT target projections. The iterative projection matching process is then applied to reconstruct the deformation represented in the synthetic target projections; the reconstructed deformation is then compared to the known deformation. The sensitivity of the process to the number of projections and the DRR/CBCT projection mismatch is explored by systematically adding noise to and perturbing the contrast of the target projections relative to the iterated source DRRs and by reducing the number of projections. Results: When there is no noise or contrast mismatch in the CBCT projection images, a set of 64 projections allows the known deformed CT image to be reconstructed to within a nRMS error of 1% and the known deformation to within a nRMS error of 7%. A CT image nRMS error of less than 4% is maintained at noise levels up to 3% of the mean projection intensity, at which the deformation error is 13%. At 1% noise level, the number of projections can be reduced to 8 while maintaining
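The forward iterative projection matching loop reduces, in a toy setting, to searching over deformation parameters that minimize the mismatch between the DRRs of the deformed prior and the target projections. A minimal sketch, assuming a pure integer translation as the "deformation model" and two orthogonal parallel projections as stand-in DRRs; the image sizes and brute-force search are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# "prior FBCT": a random 2D image; the synthetic "CBCT target" is the same
# anatomy displaced by an unknown (cyclic) integer shift
src = rng.random((32, 32))
true_d = (3, -5)
tgt = np.roll(src, true_d, axis=(0, 1))

def projections(img):
    # two orthogonal parallel projections, stand-ins for DRRs at two angles
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

p_tgt = projections(tgt)

# iterate over candidate deformation parameters (here: pure translation),
# keeping the one whose DRRs best match the target projection set
best_d, best_err = None, np.inf
for dy in range(-8, 9):
    for dx in range(-8, 9):
        drr = projections(np.roll(src, (dy, dx), axis=(0, 1)))
        err = np.sum((drr - p_tgt) ** 2)
        if err < best_err:
            best_d, best_err = (dy, dx), err
```

A real implementation would use a richer parametrized deformation and a gradient-based or stochastic optimizer instead of exhaustive search, but the structure (deform prior, compute DRRs, compare to CBCT projections, update parameters) is the same.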
A measurement system and image reconstruction in magnetic induction tomography.
Vauhkonen, M; Hamsch, M; Igney, C H
2008-06-01
Magnetic induction tomography (MIT) is a technique for imaging the internal conductivity distribution of an object. In MIT current-carrying coils are used to induce eddy currents in the object and the induced voltages are sensed with other coils. From these measurements, the internal conductivity distribution of the object can be reconstructed. In this paper, we introduce a 16-channel MIT measurement system that is capable of parallel readout of 16 receiver channels. The parallel measurements are carried out using high-quality audio sampling devices. Furthermore, approaches for reconstructing MIT images developed for the 16-channel MIT system are introduced. We consider low conductivity applications, conductivity less than 5 S m^(-1), and we use a frequency of 10 MHz. In the image reconstruction, we use time-harmonic Maxwell's equation for the electric field. This equation is solved with the finite element method using edge elements and the images are reconstructed using a generalized Tikhonov regularization approach. Both difference and static image reconstruction approaches are considered. Results from simulations and real measurements collected with the Philips 16-channel MIT system are shown. PMID:18544825
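For a linearized forward map J, the generalized Tikhonov step has the familiar closed form x = (JᵀJ + λLᵀL)⁻¹Jᵀy. A minimal sketch on a toy linear problem, where the random Jacobian, first-difference smoothness prior and λ are illustrative assumptions; in MIT, J would come from the edge-element FEM model of Maxwell's equation:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy linearized problem: J maps a conductivity perturbation to coil voltages
n, m = 40, 60                      # 40 conductivity pixels, 60 measurements
J = rng.standard_normal((m, n))    # stand-in for the FEM-computed Jacobian
x_true = np.zeros(n); x_true[10:15] = 1.0
y = J @ x_true + 0.01 * rng.standard_normal(m)

# generalized Tikhonov: minimize ||J x - y||^2 + lam * ||L x||^2
L = (np.eye(n) - np.diag(np.ones(n - 1), 1))[:-1]   # first-difference operator
lam = 1e-2
x_rec = np.linalg.solve(J.T @ J + lam * L.T @ L, J.T @ y)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The choice of L encodes the prior (identity for plain Tikhonov, a difference operator for smoothness), and λ trades data fit against regularity.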
Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
An adaptive filtered back-projection for photoacoustic image reconstruction
Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong
2015-01-01
Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in the Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between neighboring transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing
Krings, Thomas; Mauerhofer, Eric
2011-06-01
This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular dependent count rate distribution measured during a drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method, which assumes a homogeneous matrix and activity distribution. PMID:21353575
Total variation superiorization schemes in proton computed tomography image reconstruction
Penfold, S. N.; Schulte, R. W.; Censor, Y.; Rosenfeld, A. B.
2010-01-01
Purpose: Iterative projection reconstruction algorithms are currently the preferred reconstruction method in proton computed tomography (pCT). However, due to inconsistencies in the measured data arising from proton energy straggling and multiple Coulomb scattering, the noise in the reconstructed image increases with successive iterations. In the current work, the authors investigated the use of total variation superiorization (TVS) schemes that can be applied as an algorithmic add-on to perturbation-resilient iterative projection algorithms for pCT image reconstruction. Methods: The block-iterative diagonally relaxed orthogonal projections (DROP) algorithm was used for reconstructing GEANT4 Monte Carlo simulated pCT data sets. Two TVS schemes added on to DROP were investigated: the first carried out the superiorization steps once per cycle and the second once per block. Simplifications of these schemes, involving the elimination of the computationally expensive feasibility proximity checking step of the TVS framework, were also investigated. The modulation transfer function and contrast discrimination function were used to quantify spatial and density resolution, respectively. Results: With both TVS schemes, superior spatial and density resolution was achieved compared to the standard DROP algorithm. Eliminating the feasibility proximity check improved the image quality, in particular image noise, in the once-per-block superiorization, while also halving image reconstruction time. Overall, the greatest image quality was observed when carrying out the superiorization once per block and eliminating the feasibility proximity check. Conclusions: The low-contrast imaging made possible with TVS holds promise for its incorporation into future pCT studies. PMID:21158301
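Superiorization interleaves a small TV-reducing perturbation with each feasibility-seeking sweep of the iterative projection algorithm. A minimal 1D sketch, assuming plain Kaczmarz row projections as a stand-in for DROP blocks; the toy system, step size and decay schedule are illustrative assumptions, and the feasibility proximity check is omitted (as in the simplified schemes the authors tested):

```python
import numpy as np

rng = np.random.default_rng(2)

def tv(x):
    return np.sum(np.abs(np.diff(x)))

def tv_subgrad(x):
    # a subgradient of 1D total variation
    g = np.zeros_like(x)
    s = np.sign(np.diff(x))
    g[:-1] -= s
    g[1:] += s
    return g

def kaczmarz_sweep(x, A, y):
    # one sweep of row-action projections (feasibility-seeking step)
    for i in range(A.shape[0]):
        a = A[i]
        x = x + (y[i] - a @ x) / (a @ a) * a
    return x

# underdetermined consistent system with a piecewise-constant ground truth
n, m = 60, 40
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[15:35] = 1.0
y = A @ x_true

def reconstruct(superiorize, sweeps=300, beta0=1.0, decay=0.98):
    x = np.zeros(n)
    beta = beta0
    for _ in range(sweeps):
        if superiorize:
            g = tv_subgrad(x)
            ng = np.linalg.norm(g)
            if ng > 0:
                x = x - beta * g / ng    # diminishing TV-reducing perturbation
            beta *= decay
        x = kaczmarz_sweep(x, A, y)
    return x

x_plain = reconstruct(False)
x_sup = reconstruct(True)
```

Because the perturbations diminish, the iteration stays perturbation-resilient: it still reaches a feasible solution, but one with markedly lower total variation than the unperturbed run.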
Motion compensation for PET image reconstruction using deformable tetrahedral meshes
NASA Astrophysics Data System (ADS)
Manescu, P.; Ladjal, H.; Azencot, J.; Beuve, M.; Shariat, B.
2015-12-01
Respiratory-induced organ motion is a technical challenge to PET imaging. This motion induces displacements and deformation of the organ tissues, which need to be taken into account when reconstructing the spatial radiation activity. Classical image-based methods that describe motion using deformable image registration (DIR) algorithms cannot fully take into account the non-reproducibility of the respiratory internal organ motion nor the tissue volume variations that occur during breathing. In order to overcome these limitations, various biomechanical models of the respiratory system have been developed in the past decade as an alternative to DIR approaches. In this paper, we describe a new method of correcting motion artefacts in PET image reconstruction adapted to motion estimation models such as those based on the finite element method. In contrast with the DIR-based approaches, the radiation activity was reconstructed on deforming tetrahedral meshes. For this, we have re-formulated the tomographic reconstruction problem by introducing a time-dependent system matrix calculated using tetrahedral meshes instead of voxelized images. The MLEM algorithm was chosen as the reconstruction method. The simulations performed in this study show that the motion-compensated reconstruction based on tetrahedral deformable meshes has the capability to correct motion artefacts. Results demonstrate that, in the case of complex deformations, when large volume variations occur, the developed tetrahedral based method is more appropriate than the classical DIR-based one. This method can be used, together with biomechanical models controlled by external surrogates, to correct motion artefacts in PET images and thus reduce the need for additional internal imaging during the acquisition.
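The MLEM algorithm the authors adopt is the standard multiplicative update x ← (x / Aᵀ1) · Aᵀ(y / Ax); only the system matrix A changes when voxels are replaced by tetrahedral basis functions. A minimal sketch on a dense toy system, assuming a random positive matrix as a stand-in for the mesh-based, time-dependent system matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy system matrix: strictly positive entries, rows ~ detection probabilities
m, n = 40, 15
A = rng.random((m, n)) + 0.05
x_true = rng.random(n) + 0.5
y = A @ x_true                 # noiseless expected counts

# MLEM: multiplicative update; preserves non-negativity by construction
sens = A.sum(axis=0)           # per-basis-function sensitivity, A^T 1
x = np.ones(n)
for _ in range(5000):
    x *= (A.T @ (y / (A @ x))) / sens

rel_res = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

Starting from a positive image, every iterate stays positive, and for consistent data the projected estimate Ax converges toward the measured counts.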
NASA Astrophysics Data System (ADS)
Jones, Jasmine; Zhang, Rui; Heins, David; Castle, Katherine
In postmastectomy radiotherapy, an increasing number of patients receiving immediate breast reconstruction have tissue expanders inserted subpectorally. These tissue expanders are composed of silicone and are inflated with saline through an internal metallic port, which gradually stretches the muscle and skin tissue to house a permanent implant. The issue with administering radiation therapy in the presence of a tissue expander is that the port's magnetic core can perturb the dose delivered to the planning target volume (PTV) and cause significant artifacts in CT images. Several studies have explored this problem and suggest that density corrections must be accounted for in treatment planning. However, very few studies have accurately calibrated commercial treatment planning systems for the high-density material used in the port, and no studies have employed fusion imaging to yield a more accurate contour of the port in treatment planning. We compared depth-dose values in a water phantom between measurement and TPS calculations, and we were able to overcome some of the inhomogeneities presented by the image artifact by fusing the kVCT and MVCT images of the tissue expander together, resulting in a more precise comparison of dose calculations at discrete locations. We expect this method to be pivotal in the quantification of the dose distribution in the PTV. Research funded by the LS-AMP Award.
Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis
Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor
2011-10-15
Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm, iteratively, compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for C-arm full pose tracking traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (mean ± SD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed a relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducials or external trackers. Considering the efficacy of the algorithm, its successful reconstruction rate and low computational burden, the algorithm is feasible for clinical use.
The concept of causality in image reconstruction
Llacer, J.; Veklerov, E.; Nunez, J.
1988-09-01
Causal images in emission tomography are defined as those which could have generated the data by the statistical process that governs the physics of the measurement. The concept of causality was previously applied to deciding when to stop the MLE iterative procedure in PET. The present paper further explores the concept, indicates the difficulty of carrying out a correct hypothesis test for causality, discusses the assumption needed to justify the proposed tests, and outlines a possible methodology for justifying that assumption. The paper also describes several methods that we have found to generate causal images, and it shows that the set of causal images is rather large. This set includes images judged to be superior to the best maximum likelihood images, but it also includes unacceptable and noisy images. The paper concludes by proposing to use causality as a constraint in optimization problems. 16 refs., 5 figs.
An accurate test for acute appendicitis: In-111 WBC imaging
Navarro, D.A.; Weber, P.M.; Kang, I.Y.; dosRemedios, L.V.; Jasko, I.A.
1985-05-01
The decision to operate when acute appendicitis (APPY) is suspected is often difficult. Surgeons accept up to a 20% false positive rate to avoid any delay that may result in appendiceal rupture and peritonitis. The authors have successfully improved early diagnostic accuracy by using abdominal imaging beginning 2 hours after injecting In-111 labeled WBC. Patients with clear-cut APPY had laparotomy and were not studied. Those who were to be observed in the ER for possible APPY had their leukocytes harvested, labeled with In-111 oxine, and reinjected. Abnormal localized activity in the right lower quadrant (RLQ) imaged at 2 hours was graded relative to bone marrow activity (BM): 0, 1+
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Reconstructing irregularly sampled images by neural networks
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Yellott, John I., Jr.
1989-01-01
Neural-network-like models of receptor position learning and interpolation function learning are being developed as models of how the human nervous system might handle the problems of keeping track of the receptor positions and interpolating the image between receptors. These models may also be of interest to designers of image processing systems desiring the advantages of a retina-like image sampling array.
NASA Astrophysics Data System (ADS)
Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc
2015-03-01
The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well-defined in case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue alternate methodologies like the model observers have been proposed recently to allow a quantification of a usually task-dependent image quality metric [1]. As an alternative we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
NASA Astrophysics Data System (ADS)
Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao
2015-11-01
4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However the image reconstruction often involves the binning of projection data to each temporal phase, and therefore suffers from deteriorated image quality due to inaccurate or uneven binning in phase, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion. That is, given the measurements of breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after the projection binning, the new method reconstructs the time-independent reference image and vector fields with no requirement of binning. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields, and the problem is solved by the proximal alternating minimization algorithm, during which the split Bregman method is used to reconstruct the reference image, and Chambolle's duality-based algorithm is used to reconstruct the vector fields. The convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method significantly improves image reconstruction accuracy, owing to the absence of binning and the reduced number of unknowns afforded by the 5D model.
Full field spatially-variant image-based resolution modelling reconstruction for the HRRT.
Angelis, Georgios I; Kotasidis, Fotis A; Matthews, Julian C; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J
2015-03-01
Accurate characterisation of the scanner's point spread function across the entire field of view (FOV) is crucial in order to account for spatially dependent factors that degrade the resolution of the reconstructed images. The HRRT users' community resolution modelling reconstruction software includes a shift-invariant resolution kernel, which leads to transaxially non-uniform resolution in the reconstructed images. Unlike previous work to date in this field, this work is the first to model the spatially variant resolution across the entire FOV of the HRRT, which is the highest resolution human brain PET scanner in the world. In this paper we developed a spatially variant image-based resolution modelling reconstruction dedicated to the HRRT, using an experimentally measured shift-variant resolution kernel. Previously, the system response was measured and characterised in detail across the entire FOV of the HRRT, using a printed point source array. The newly developed resolution modelling reconstruction was applied on measured phantom, as well as clinical data and was compared against the HRRT users' community resolution modelling reconstruction, which is currently in use. Results demonstrated improvements both in contrast and resolution recovery, particularly for regions close to the edges of the FOV, with almost uniform resolution recovery across the entire transverse FOV. In addition, because the newly measured resolution kernel is slightly broader with wider tails, compared to the deliberately conservative kernel employed in the HRRT users' community software, the reconstructed images appear to have not only improved contrast recovery (up to 20% for small regions), but also better noise characteristics. PMID:25596999
Ultra-Fast Image Reconstruction of Tomosynthesis Mammography Using GPU
Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.
2015-01-01
Digital Breast Tomosynthesis (DBT) is a technology that creates three-dimensional (3D) images of breast tissue. Tomosynthesis mammography can detect lesions that are not detectable with other imaging systems. If image reconstruction time is on the order of seconds, Tomosynthesis systems can be used to perform Tomosynthesis-guided interventional procedures. This research was designed to study an ultra-fast image reconstruction technique for Tomosynthesis mammography systems using a Graphics Processing Unit (GPU). First, Tomosynthesis mammography projections were simulated: a 3D breast phantom was designed from empirical data, based on MRI data in its natural form, and projections were then created from this phantom. The FBP-based image reconstruction algorithm was programmed in C++ in two versions, one running on the central processing unit (CPU) and one on the GPU, and the image reconstruction time was measured for both implementations. PMID:26171373
Sparse representation for the ISAR image reconstruction
NASA Astrophysics Data System (ADS)
Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.
2016-05-01
In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization, which recovers the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, increasing the efficiency and decreasing the computational cost of radar imaging.
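The recovery step behind such sparse radar imaging is typically an l1-regularized least-squares problem, which can be solved by iterative soft thresholding (ISTA). A minimal sketch on a random measurement model; the sizes, sparsity level and λ are illustrative assumptions, not an ISAR forward model:

```python
import numpy as np

rng = np.random.default_rng(4)

# sparse scene observed through far fewer random measurements than unknowns
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)
y = A @ x_true

# ISTA: solve min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L                # gradient step on the quadratic
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
```

With m = 80 measurements and only k = 5 nonzeros among n = 200 unknowns, the l1 solver recovers the scene far below the Nyquist sample count, which is the point the abstract makes.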
Hofmann, Christian; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc
2014-06-15
Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus of how to best achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function, which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off: the stronger the regularization, and thus the noise reduction, the greater the loss of spatial resolution and thus of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts by generating basis images, which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast
Superresolution image reconstruction from a sequence of aliased imagery.
Young, S Susan; Driggers, Ronald G
2006-07-20
We present a superresolution image reconstruction algorithm for a sequence of aliased imagery. The subpixel shifts (displacement) among the images are unknown due to the uncontrolled natural jitter of the imager. A correlation method is utilized to estimate subpixel shifts between each low-resolution aliased image with respect to a reference image. An error-energy reduction algorithm is derived to reconstruct the high-resolution alias-free output image. The main feature of this proposed error-energy reduction algorithm is that we treat the spatial samples from low-resolution images that possess unknown and irregular (uncontrolled) subpixel shifts as a set of constraints to populate an oversampled (sampled above the desired output bandwidth) processing array. The estimated subpixel locations of these samples and their values constitute a spatial domain constraint. Furthermore, the bandwidth of the alias-free image (or the sensor imposed bandwidth) is the criterion used as a spatial frequency domain constraint on the oversampled processing array. The results of testing the proposed algorithm on simulated low-resolution forward-looking infrared (FLIR) images, real-world FLIR images, and visible images are provided. A comparison of the proposed algorithm with a standard interpolation algorithm for processing the simulated low-resolution FLIR images is also provided. PMID:16826246
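The alternation between the two constraints can be sketched as a projections-onto-convex-sets style loop. This is a simplified illustration, not the authors' exact formulation: sample positions are snapped to the nearest bin of the oversampled grid, and the bandwidth constraint is a square low-pass mask (both assumptions for illustration).

```python
import numpy as np

def error_energy_reduction(samples, coords, hi_shape, band, n_iter=50):
    """Sketch of an error-energy-reduction style reconstruction.

    `samples`/`coords` are low-resolution pixel values and their
    estimated (nearest-bin) sub-pixel positions on an oversampled grid
    of shape `hi_shape`; `band` is the half-width (in FFT bins) of the
    alias-free bandwidth. Alternates a spatial constraint (known
    samples re-imposed) with a frequency constraint (energy outside
    `band` zeroed).
    """
    img = np.zeros(hi_shape)
    rows, cols = coords
    for _ in range(n_iter):
        img[rows, cols] = samples            # spatial-domain constraint
        spec = np.fft.fftshift(np.fft.fft2(img))
        cy, cx = hi_shape[0] // 2, hi_shape[1] // 2
        mask = np.zeros(hi_shape, dtype=bool)
        mask[cy - band:cy + band + 1, cx - band:cx + band + 1] = True
        spec[~mask] = 0.0                    # frequency-domain constraint
        img = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    return img
```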
Filter and slice thickness selection in SPECT image reconstruction
Ivanovic, M.; Weber, D.A.; Wilson, G.A.; O'Mara, R.E.
1985-05-01
The choice of filter and slice thickness in SPECT image reconstruction as a function of activity and linear and angular sampling was investigated in phantom and patient imaging studies. Reconstructed transverse and longitudinal spatial resolution of the system were measured using a line source in a water-filled phantom. Phantom studies included measurements of the Data Spectrum phantom; clinical studies included tomographic procedures in 40 patients undergoing imaging of the temporomandibular joint. Slices of the phantom and patient images were evaluated for spatial resolution, noise, and image quality. Major findings include: spatial resolution and image quality improve with increasing linear sampling frequencies over the range of 4-8 mm/p in the phantom images; the best spatial resolution and image quality in clinical images were observed at a linear sampling frequency of 6 mm/p; the Shepp and Logan filter gives the best spatial resolution for phantom studies at the lowest linear sampling frequency; the smoothed Shepp and Logan filter provides the best quality images without loss of resolution at higher frequencies; and spatial resolution and image quality improve with increased angular sampling frequency in the phantom at 40 c/p but appear to be independent of angular sampling frequency at 400 c/p.
A rapid reconstruction algorithm for three-dimensional scanning images
NASA Astrophysics Data System (ADS)
Xiang, Jiying; Wu, Zhen; Zhang, Ping; Huang, Dexiu
1998-04-01
A `simulated fluorescence' three-dimensional reconstruction algorithm, which is especially suitable for confocal images of partially transparent biological samples, is proposed in this paper. To make the retinal projection of the object reappear and to avoid excessive memory consumption, the original image is rotated and compressed before processing. A left image and a right image are rendered in different colors to enhance the sense of stereo. The details originally hidden in deep layers are well exhibited with the aid of an `auxiliary directional source'. In addition, the time consumption is greatly reduced compared with conventional methods such as `ray tracing'. The realization of the algorithm is illustrated by a set of reconstructed images.
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Algebraic reconstruction discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without exceeding the capabilities of commodity computers. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. PMID:22325240
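The blockwise storage scheme can be sketched as follows. This is a minimal dense-numpy illustration using CGLS (conjugate gradients on the normal equations, a close relative of LSQR) rather than LSQR itself; a real CBCT implementation would use sparse blocks, possibly paged from disk.

```python
import numpy as np

def blockwise_cgls(blocks, b, n_iter=20):
    """CGLS with the system matrix stored as a list of row blocks, so
    A @ x and A.T @ y are computed block by block and the full matrix
    never has to be held in memory at once.
    """
    n = blocks[0].shape[1]
    splits = np.cumsum([blk.shape[0] for blk in blocks])[:-1]

    def matvec(x):                      # A @ x, one block at a time
        return np.concatenate([blk @ x for blk in blocks])

    def rmatvec(y):                     # A.T @ y, one block at a time
        return sum(blk.T @ yp for blk, yp in zip(blocks, np.split(y, splits)))

    x = np.zeros(n)
    r = b - matvec(x)
    s = rmatvec(r)
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = matvec(p)
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        if gamma_new < 1e-24:           # converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

Tikhonov regularization fits the same scheme by appending a scaled identity as one extra row block and zeros to b.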
Local fingerprint image reconstruction based on Gabor filtering
NASA Astrophysics Data System (ADS)
Bakhtiari, Somayeh; Agaian, Sos S.; Jamshidi, Mo
2012-06-01
In this paper, we propose two solutions for fingerprint local image reconstruction based on Gabor filtering. Gabor filtering is a popular method for fingerprint image enhancement. However, the reliability of the information in the output image suffers when the input image has poor quality. This is the result of spurious estimates of frequency and orientation by classical approaches, particularly in scratch regions. In both techniques of this paper, scratch marks are first recognized using a reliability image computed from the gradient images. The first algorithm is based on an inpainting technique, and the second method employs two different kernels for the scratch and non-scratch parts of the image to calculate the gradient images. The simulation results show that both approaches preserve the actual information of the image while connecting discontinuities correctly by approximating the orientation matrix more faithfully.
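A standard Gabor kernel of the kind used for fingerprint enhancement can be sketched as below. This is a generic textbook form, not the paper's specific kernels; the function names and the FFT-based convolution helper are illustrative.

```python
import numpy as np

def gabor_kernel(ksize, theta, freq, sigma):
    """Real-valued Gabor kernel commonly used in fingerprint
    enhancement: a cosine of spatial frequency `freq` along
    orientation `theta`, windowed by an isotropic Gaussian.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so x' lies across the ridge orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * x_t)

def apply_gabor(image, kernel):
    """Circular 'same' convolution via FFT (sketch)."""
    pad = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
```

The two kernels mentioned in the abstract would correspond to different (theta, freq, sigma) choices for scratch and non-scratch regions.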
Jang, Seong-Wook; Seo, Young-Jin; Yoo, Yon-Sik; Kim, Yoon Sang
2014-01-01
The demand for an accurate and accessible image segmentation to generate 3D models from CT scan data has been increasing as such models are required in many areas of orthopedics. In this paper, to find the optimal image segmentation to create a 3D model of the knee CT data, we compared and validated segmentation algorithms based on both objective comparisons and finite element (FE) analysis. For comparison purposes, we used 1 model reconstructed in accordance with the instructions of a clinical professional and 3 models reconstructed using image processing algorithms (Sobel operator, Laplacian of Gaussian operator, and Canny edge detection). Comparison was performed by inspecting intermodel morphological deviations with the iterative closest point (ICP) algorithm, and FE analysis was performed to examine the effects of the segmentation algorithm on the results of the knee joint movement analysis. PMID:25538950
Alpuche Aviles, Jorge E; Pistorius, Stephen; Gordon, Richard; Elbakri, Idris A
2011-01-01
This work presents a first-generation incoherent scatter CT (ISCT) hybrid (analytic-iterative) reconstruction algorithm for accurate ρ_e imaging of objects with clinically relevant sizes. The algorithm reconstructs quantitative images of electron density ρ_e within a few iterations, avoiding the challenges of optimization-based reconstruction algorithms while addressing the limitations of current analytical algorithms. A 4π detector is conceptualized in order to address the issue of directional dependency and is then replaced with a ring of detectors which detect a constant fraction of the scattered photons. The ISCT algorithm corrects for the attenuation of photons using a limited number of iterations and filtered back projection (FBP) for image reconstruction. This results in a hybrid reconstruction algorithm that was tested with sinograms generated by Monte Carlo (MC) and analytical (AN) simulations. Results show that the ISCT algorithm is weakly dependent on the initial ρ_e estimate. Simulation results show that the proposed algorithm reconstructs ρ_e images with a mean error of -1% ± 3% for the AN model and from -6% to -8% for the MC model. Finally, the algorithm is capable of reconstructing qualitatively good images even in the presence of multiple scatter. The proposed algorithm would be suitable for in vivo medical imaging as long as practical limitations can be addressed. PMID:21422588
A methodology for event reconstruction from trace images.
Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre
2015-03-01
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step build on the previous one. However, the methodology is not linear; it is a cyclic, iterative progression towards knowledge about an event. The preliminary analysis is a pre-evaluation phase wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally
An Image-Based Technique for 3D Building Reconstruction Using Multi-View UAV Images
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2015-12-01
Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects among complex city structures, has become a challenging topic in computer vision and photogrammetry research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
Dictionary Approaches to Image Compression and Reconstruction
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1998-01-01
This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
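Of the four methods compared above, Matching Pursuits is the simplest to sketch: it greedily picks the dictionary atom most correlated with the current residual and subtracts its projection. This is the textbook algorithm over a generic column-dictionary, not the paper's wavelet-packet implementation.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Plain Matching Pursuit over a dictionary whose columns are
    unit-norm waveforms phi_gamma: repeatedly pick the atom most
    correlated with the residual and subtract its projection.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        gamma = np.argmax(np.abs(corr))     # best-matching atom index
        coeffs[gamma] += corr[gamma]
        residual -= corr[gamma] * dictionary[:, gamma]
    return coeffs, residual
```

BP, BOB, and MOF differ in how they choose the coefficients (global optimization, best basis search, least squares) but share the same dictionary representation.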
Super-resolution reconstruction of terahertz images
NASA Astrophysics Data System (ADS)
Li, Yue; Li, Li; Hellicar, Andrew; Guo, Y. Jay
2008-04-01
A prototype terahertz imaging system has been built at CSIRO. The imager uses a backward wave oscillator as the source and a Schottky diode as the detector. It has a bandwidth of 500-700 GHz and a source power of 10 mW. The resolution at 610 GHz is about 0.85 mm. Even though this imaging system is coherent, only the signal power is measured at the detector and the phase information of the detected wave is lost. Some initial images of tree leaves, chocolate bars and pinholes have been acquired with this system. In this paper, we report experimental results of an attempt to improve the resolution of this imaging system beyond the diffraction limit (super-resolution). Because the phase information needed to apply any coherent super-resolution algorithm is unavailable, the performance of the incoherent Richardson-Lucy super-resolution algorithm has been evaluated. Experimental results have demonstrated that the Richardson-Lucy algorithm can significantly improve the resolution of these images in some sample areas while producing artifacts in other areas. These experimental results are analyzed and discussed.
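The Richardson-Lucy iteration itself is compact. Below is a standard FFT-based sketch under the usual assumptions (circular convolution, known PSF normalised to sum to 1 and centred at index (0, 0)); it is not the paper's implementation.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Incoherent Richardson-Lucy deconvolution (circular-convolution
    sketch using FFTs). `psf` must have the same shape as `observed`,
    be centred at index (0, 0), and sum to 1.
    """
    otf = np.fft.fft2(psf)
    conv = lambda img, tf: np.real(np.fft.ifft2(np.fft.fft2(img) * tf))
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = conv(estimate, otf)               # forward blur
        ratio = observed / (blurred + eps)          # data agreement
        estimate *= conv(ratio, np.conj(otf))       # correlate with PSF
    return estimate
```

The multiplicative update preserves non-negativity, which suits power-only (incoherent) measurements.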
Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.
Fromm, S A; Sachse, C
2016-01-01
Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction, and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters, as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement through the introduction of direct electron detectors has significantly enhanced image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method. PMID:27572732
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximal system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties completely. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted to assess clinical potential.
Parallel Image Reconstruction for New Vacuum Solar Telescope
NASA Astrophysics Data System (ADS)
Li, Xue-Bao; Wang, Feng; Xiang, Yong Yuan; Zheng, Yan Fang; Liu, Ying Bo; Deng, Hui; Ji, Kai Fan
2014-04-01
Many advanced ground-based solar telescopes improve the spatial resolution of observation images using an adaptive optics (AO) system. As any AO correction remains only partial, it is necessary to use post-processing image reconstruction techniques such as speckle masking or shift-and-add (SAA) to reconstruct a high-spatial-resolution image from atmospherically degraded solar images. In the New Vacuum Solar Telescope (NVST), the spatial resolution in solar images is improved by frame selection and SAA. In order to overcome the burden of massive speckle data processing, we investigate the possibility of using the speckle reconstruction program in a real-time application at the telescope site. The code has been written in the C programming language and optimized for parallel processing in a multi-processor environment. We analyze the scalability of the code to identify possible bottlenecks, and we conclude that the presented code is capable of being run in real-time reconstruction applications at NVST and future large aperture solar telescopes if care is taken that the multi-processor environment has low latencies between the computation nodes.
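The frame-selection plus shift-and-add pipeline can be sketched as below. This is an illustrative single-process version (sharpness ranked by RMS contrast, integer-pixel alignment by FFT cross-correlation); the NVST code is a parallel C implementation and its selection and alignment criteria may differ.

```python
import numpy as np

def frame_select_and_add(frames, keep_fraction=0.2):
    """Frame selection followed by shift-and-add (SAA): keep the
    sharpest frames (here ranked by RMS contrast), align each to the
    first kept frame by the integer shift that maximises circular
    cross-correlation, and average.
    """
    frames = np.asarray(frames, dtype=float)
    sharpness = frames.std(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_fraction))
    kept = frames[np.argsort(sharpness)[::-1][:n_keep]]

    ref_spec = np.conj(np.fft.fft2(kept[0]))
    acc = np.zeros_like(kept[0])
    for frame in kept:
        xcorr = np.real(np.fft.ifft2(np.fft.fft2(frame) * ref_spec))
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return acc / len(kept)
```

Parallelisation, as in the paper, would distribute frames (or image tiles) across computation nodes and reduce the accumulator at the end.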
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
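The kind of evaluation described above can be sketched with a blockwise DCT codec. This is illustrative only: uniform quantisation replaces the project's entropy coders, and the nonzero-coefficient fraction stands in for the coded size, so the "ratio" here is only a rough proxy for the reported 2:1 to 10:1 figures.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    mat[0] *= 1 / np.sqrt(2)
    return mat * np.sqrt(2.0 / n)

def compress_reconstruct(image, q=16):
    """8x8 blockwise DCT, uniform quantisation with step `q`,
    reconstruction, and RMS error in grey levels (sketch).
    """
    D = dct_matrix()
    h, w = (s - s % 8 for s in image.shape)
    img = image[:h, :w].astype(float)
    rec = np.empty_like(img)
    nonzero = 0
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            coeffs = np.round(D @ img[y:y+8, x:x+8] @ D.T / q)
            nonzero += np.count_nonzero(coeffs)
            rec[y:y+8, x:x+8] = D.T @ (coeffs * q) @ D
    rmse = np.sqrt(np.mean((img - rec) ** 2))
    ratio = img.size / max(nonzero, 1)
    return rec, rmse, ratio
```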
Image reconstructions from super-sampled data sets with resolution modeling in PET imaging
Li, Yusheng; Matej, Samuel; Metzler, Scott D.
2014-01-01
Purpose: Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. Methods: The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, the authors extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Results: Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. The
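The MLEM extension to multiple acquisitions can be sketched as follows. This is a minimal dense-matrix illustration of the structure described above: each acquisition k has its own system matrix A_k (tomographic projection composed with shift and downsampling), and all data sets jointly update one common image. The matrix composition and names are assumptions for illustration.

```python
import numpy as np

def mlem_supersampled(sinograms, systems, n_iter=50):
    """MLEM over several super-sampled acquisitions: `systems` is a
    list of per-acquisition system matrices A_k and `sinograms` the
    matching measured data vectors y_k.
    """
    n = systems[0].shape[1]
    x = np.ones(n)
    # sensitivity image: sum over acquisitions of A_k^T 1
    sens = sum(A.T @ np.ones(A.shape[0]) for A in systems)
    for _ in range(n_iter):
        back = np.zeros(n)
        for y, A in zip(sinograms, systems):
            proj = A @ x
            back += A.T @ (y / np.maximum(proj, 1e-12))
        x *= back / np.maximum(sens, 1e-12)
    return x
```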
Ukwatta, Eranga; Arevalo, Hermenegild; Rajchl, Martin; White, James; Pashakhanloo, Farhad; Prakosa, Adityo; Herzka, Daniel A.; McVeigh, Elliot; Lardo, Albert C.; Trayanova, Natalia A.; Vadakkumpadan, Fijoy
2015-01-01
Purpose: Accurate three-dimensional (3D) reconstruction of myocardial infarct geometry is crucial to patient-specific modeling of the heart aimed at providing therapeutic guidance in ischemic cardiomyopathy. However, myocardial infarct imaging is clinically performed using two-dimensional (2D) late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) techniques, and a method to build accurate 3D infarct reconstructions from the 2D LGE-CMR images has been lacking. The purpose of this study was to address this need. Methods: The authors developed a novel methodology to reconstruct 3D infarct geometry from segmented low-resolution (Lo-res) clinical LGE-CMR images. Their methodology employed the so-called logarithm of odds (LogOdds) function to implicitly represent the shape of the infarct in segmented image slices as LogOdds maps. These 2D maps were then interpolated into a 3D image, and the result transformed via the inverse of LogOdds to a binary image representing the 3D infarct geometry. To assess the efficacy of this method, the authors utilized 39 high-resolution (Hi-res) LGE-CMR images, including 36 in vivo acquisitions of human subjects with prior myocardial infarction and 3 ex vivo scans of canine hearts following coronary ligation to induce infarction. The infarct was manually segmented by trained experts in each slice of the Hi-res images, and the segmented data were downsampled to typical clinical resolution. The proposed method was then used to reconstruct 3D infarct geometry from the downsampled images, and the resulting reconstructions were compared with the manually segmented data. The method was extensively evaluated using metrics based on geometry as well as results of electrophysiological simulations of cardiac sinus rhythm and ventricular tachycardia in individual hearts. Several alternative reconstruction techniques were also implemented and compared with the proposed method. Results: The accuracy of the LogOdds method in reconstructing 3D
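The LogOdds interpolation step can be sketched in 2D as below. This is a simplified illustration: the authors interpolate 2D LogOdds maps into a full 3D image, and practical use derives smooth probability maps (e.g. from distance transforms) rather than the raw clipped binary logits used here; `eps` and the thresholds are illustrative.

```python
import numpy as np

def logodds_interpolate(slice_a, slice_b, n_between, eps=1e-3):
    """Interpolate between two segmented (binary) slices through the
    log-odds domain: masks are softened to probabilities, mapped by
    logit, linearly interpolated, then mapped back via the inverse of
    LogOdds (the sigmoid) and thresholded.
    """
    def logit(mask):
        p = np.clip(mask.astype(float), eps, 1.0 - eps)
        return np.log(p / (1.0 - p))

    la, lb = logit(slice_a), logit(slice_b)
    out = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)
        lo = (1.0 - t) * la + t * lb
        prob = 1.0 / (1.0 + np.exp(-lo))      # inverse LogOdds
        out.append((prob > 0.5).astype(np.uint8))
    return out
```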
Kim, Joshua; Ionascu, Dan; Zhang, Tiezhi
2013-01-01
Purpose: To accelerate iterative algebraic reconstruction algorithms using a cylindrical image grid. Methods: Tetrahedron beam computed tomography (TBCT) is designed to overcome the scatter and detector problems of cone beam computed tomography (CBCT). Iterative algebraic reconstruction algorithms have been shown to mitigate approximate reconstruction artifacts that appear at large cone angles, but clinical implementation is limited by their high computational cost. In this study, a cylindrical voxelization method on a cylindrical grid is developed in order to take advantage of the symmetries of the cylindrical geometry. The cylindrical geometry is a natural fit for the circular scanning trajectory employed in volumetric CT methods such as CBCT and TBCT. This method was implemented in combination with the simultaneous algebraic reconstruction technique (SART). Both two- and three-dimensional numerical phantoms as well as a patient CT image were utilized to generate the projection sets used for reconstruction. The reconstructed images were compared to the original phantoms using a set of three figures of merit (FOM). Results: The cylindrical voxelization on a cylindrical reconstruction grid was successfully implemented in combination with the SART reconstruction algorithm. The FOM results showed that the cylindrical reconstructions were able to maintain the accuracy of the Cartesian reconstructions. In three dimensions, the cylindrical method provided better accuracy than the Cartesian methods. At the same time, the cylindrical method was able to provide a speedup factor of approximately 40 while also reducing the system matrix storage size by 2 orders of magnitude. Conclusions: TBCT image reconstruction using a cylindrical image grid was able to provide a significant improvement in the reconstruction time and a more compact system matrix for storage on the hard drive and in memory while maintaining the image quality provided by the Cartesian voxelization on a
Reconstruction techniques for sparse multistatic linear array microwave imaging
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2014-06-01
Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
Fast texture and structure image reconstruction using the perceptual hash
NASA Astrophysics Data System (ADS)
Voronin, V. V.; Marchuk, V. I.; Frantc, V. A.; Egiazarian, Karen
2013-02-01
This paper focuses on fast texture and structure reconstruction of images. The proposed method consists of several steps. The first deals with textural features extracted from the input images based on Laws' energy measures. The pixels around damaged image regions are clustered using these features, which allows defining the correspondence between pixels from different patches. Second, a cubic spline curve is applied to reconstruct structure and to connect edges and contours in the damaged area. The current pixel to be recovered is chosen using the fast marching approach. The Telea method or modifications of the exemplar-based method are then applied, depending on the classification of the region in which the to-be-restored pixel is located. To find patches quickly, the modification uses a perceptual hash. This strategy yields a data structure containing the hashes of similar patches, reducing the search procedure to a hash computation for each patch. The proposed method is tested on various sample images with different geometrical features and compared with state-of-the-art image inpainting methods; the proposed technique is shown to produce better results in reconstructing small and large missing regions in test images.
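The perceptual-hash patch lookup can be sketched as below. The paper does not specify which perceptual hash it uses; the 64-bit average hash (aHash) here, and the patch/step sizes, are illustrative assumptions.

```python
import numpy as np

def average_hash(patch, hash_size=8):
    """64-bit average hash (aHash) of a patch: downsample to
    hash_size x hash_size by block averaging, then threshold at the
    mean to obtain a bit string, returned as an integer key.
    """
    h, w = patch.shape
    ph, pw = h // hash_size, w // hash_size
    small = patch[:ph * hash_size, :pw * hash_size] \
        .reshape(hash_size, ph, hash_size, pw).mean(axis=(1, 3))
    bits = (small > small.mean()).ravel()
    return int(''.join('1' if b else '0' for b in bits), 2)

def build_patch_index(image, patch_size=16, step=8):
    """Index patches by perceptual hash so candidate matches for an
    exemplar-based inpainter are found by hash lookup instead of an
    exhaustive patch search."""
    index = {}
    h, w = image.shape
    for y in range(0, h - patch_size + 1, step):
        for x in range(0, w - patch_size + 1, step):
            key = average_hash(image[y:y + patch_size, x:x + patch_size])
            index.setdefault(key, []).append((y, x))
    return index
```

At fill time, hashing the target patch's known pixels and looking up the key (and near-miss keys within a small Hamming distance) replaces the exhaustive search.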
Improved satellite image compression and reconstruction via genetic algorithms
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary
2008-10-01
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced as a linear combination of two separately reconstructed images - one containing gray-level information and the other with enhanced high-frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
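The final-image construction can be illustrated schematically: two few-iteration reconstructions are blended linearly, with a single weight controlling the high-frequency contribution. A toy sketch, where the box-blur detail extractor and the function names are illustrative stand-ins, not the authors' IIR algorithms:

```python
import numpy as np

def highpass(img):
    """Crude high-frequency component: the image minus a 3x3 box blur.
    (A stand-in for the second, edge-enhanced few-iteration IIR output.)"""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img - blur

def blend_reconstructions(base, detail, w):
    """Final image as a linear combination of two reconstructions: `base`
    carries the gray-level information, `detail` the enhanced
    high-frequency components; `w` is the detail-weight parameter."""
    return base + w * detail
```

The appeal of this structure is that each knob acts on an interpretable image property (overall gray level vs. edge sharpness), unlike raw IIR hyperparameters.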
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from EIT image analysis. Indices that were validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms. PMID:24845059
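As an example of index (d), the global inhomogeneity (GI) index is commonly computed as the normalized absolute deviation from the median tidal impedance change within the lung region. A minimal sketch, assuming a precomputed tidal image and lung mask:

```python
import numpy as np

def global_inhomogeneity(tidal_image, lung_mask):
    """Global inhomogeneity (GI) index of an EIT tidal image: the summed
    absolute deviation from the median tidal impedance change inside the
    lung region, normalized by the total tidal impedance change. Lower
    values indicate more homogeneous ventilation.
    """
    lung = tidal_image[lung_mask]
    return float(np.sum(np.abs(lung - np.median(lung))) / np.sum(lung))
```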
Skin image reconstruction using Monte Carlo based color generation
NASA Astrophysics Data System (ADS)
Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji
2010-11-01
We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in a nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained by an RGB camera and a spectrophotometer, respectively. The skin image is separated into a color component and a texture component. The measured spectral reflectance is used to evaluate the scattering and absorption coefficients in each of the nine layers, which are necessary for the Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance under given conditions for the nine-layered skin tissue model. The new color component is combined with the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.
Quantitative Photoacoustic Image Reconstruction using Fluence Dependent Chromophores
Cox, B.T.; Laufer, J.G.; Beard, P.C.
2010-01-01
In biomedical photoacoustic imaging the images are proportional to the absorbed optical energy density, and not the optical absorption, which makes it difficult to obtain a quantitatively accurate image showing the concentration of a particular absorbing chromophore from photoacoustic measurements alone. Here it is shown that the spatially varying concentration of a chromophore whose absorption becomes zero above a threshold light fluence can be estimated from photoacoustic images obtained at increasing illumination strengths. This technique provides an alternative to model-based multiwavelength approaches to quantitative photoacoustic imaging, and a new approach to photoacoustic molecular and functional imaging. PMID:21258458
Advances in imaging technologies for planning breast reconstruction.
Mohan, Anita T; Saint-Cyr, Michel
2016-04-01
The role and choice of preoperative imaging for planning in breast reconstruction is still a disputed topic in the reconstructive community, with varying opinion on the necessity, the ideal imaging modality, costs and impact on patient outcomes. Since the advent of perforator flaps, their use in microsurgical breast reconstruction has grown. Perforator-based flaps afford lower donor-site morbidity by sparing the underlying muscle, provide durable results and superior cosmesis to create a natural-looking new breast, and are preferred in the context of radiation therapy. However, these surgeries are complex, more technically challenging than implant-based reconstruction, and leave little room for error. Imaging in breast reconstruction can assist the surgeon in exploring or confirming flap choices based on donor-site characteristics and the presence of suitable perforators. Vascular anatomical studies in the lab have provided the surgeon a foundation of knowledge on the location and vascular territories of individual perforators, improving our understanding of flap design and safe flap harvest. The creation of a presurgical map in patients can highlight any abnormal or individual anatomical variance to optimize flap design, intraoperative decision-making and execution of flap harvest with greater predictability and efficiency. This article highlights the role and techniques of preoperative planning using the newer technologies that have been adopted in reconstructive clinical practice: computed tomographic angiography (CTA), magnetic resonance angiography (MRA), laser-assisted indocyanine green fluorescence angiography (LA-ICGFA) and dynamic infrared thermography (DIRT). The primary focus of this paper is on the application of the CTA and MRA imaging modalities. PMID:27047790
Cascaded diffractive optical elements for improved multiplane image reconstruction.
Gülses, A Alkan; Jenkins, B Keith
2013-05-20
Computer-generated phase-only diffractive optical elements in a cascaded setup are designed by one deterministic and one stochastic algorithm for multiplane image formation. It is hypothesized that increasing the number of elements as wavefront modulators in the longitudinal dimension would enlarge the available solution space, thus enabling enhanced image reconstruction. Numerical results show that increasing the number of holograms improves quality at the output. Design principles, computational methods, and specific conditions are discussed. PMID:23736247
Calibrationless Parallel Imaging Reconstruction Based on Structured Low-Rank Matrix Completion
Shin, Peter J.; Larson, Peder E.Z.; Ohliger, Michael A.; Elad, Michael; Pauly, John M.; Vigneron, Daniel B.; Lustig, Michael
2013-01-01
Purpose: A calibrationless parallel imaging reconstruction method, termed simultaneous auto-calibrating and k-space estimation (SAKE), is presented. It is a data-driven, coil-by-coil reconstruction method that does not require a separate calibration step for estimating coil sensitivity information. Methods: In SAKE, an under-sampled multi-channel dataset is structured into a single data matrix. The reconstruction is then formulated as a structured low-rank matrix completion problem. An iterative solution that implements a projection-onto-sets algorithm with singular value thresholding is described. Results: Reconstruction results are demonstrated for retrospectively and prospectively under-sampled, multi-channel Cartesian data having no calibration signals. Additionally, non-Cartesian data reconstruction is presented. Finally, improved image quality is demonstrated by combining SAKE with wavelet-based compressed sensing. Conclusion: As estimation of coil sensitivity information is not needed, the proposed method could potentially benefit MR applications where acquiring accurate calibration data is limiting or not possible at all. PMID:24248734
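The low-rank projection at the heart of such a projection-onto-sets iteration can be sketched with a truncated SVD. This toy version uses hard rank truncation and a hypothetical function name; the full SAKE iteration also structures the multi-channel k-space into the data matrix and re-imposes consistency with the acquired samples:

```python
import numpy as np

def low_rank_project(matrix, rank):
    """Project a matrix onto the set of matrices with at most the given
    rank by truncating its singular values -- the low-rank step of a
    projection-onto-sets iteration.
    """
    U, s, Vh = np.linalg.svd(matrix, full_matrices=False)
    s[rank:] = 0.0          # zero out all but the leading singular values
    return (U * s) @ Vh
```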
A Monte Carlo based three-dimensional dose reconstruction method derived from portal dose images
Elmpt, Wouter J. C. van; Nijsten, Sebastiaan M. J. J. G.; Schiffeleers, Robert F. H.; Dekker, Andre L. A. J.; Mijnheer, Ben J.; Lambin, Philippe; Minken, Andre W. H.
2006-07-15
chamber. It can be concluded that our new dose reconstruction algorithm is able to reconstruct the 3-D dose distribution in phantoms with a high accuracy. This result is obtained by combining portal dose images measured prior to treatment with an accurate dose calculation engine.
Cortical Surface Reconstruction from High-Resolution MR Brain Images
Osechinskiy, Sergey; Kruggel, Frithjof
2012-01-01
Reconstruction of the cerebral cortex from magnetic resonance (MR) images is an important step in quantitative analysis of the human brain structure, for example, in sulcal morphometry and in studies of cortical thickness. Existing cortical reconstruction approaches are typically optimized for standard resolution (~1 mm) data and are not directly applicable to higher resolution images. A new PDE-based method is presented for the automated cortical reconstruction that is computationally efficient and scales well with grid resolution, and thus is particularly suitable for high-resolution MR images with submillimeter voxel size. The method uses a mathematical model of a field in an inhomogeneous dielectric. This field mapping, similarly to a Laplacian mapping, has nice laminar properties in the cortical layer, and helps to identify the unresolved boundaries between cortical banks in narrow sulci. The pial cortical surface is reconstructed by advection along the field gradient as a geometric deformable model constrained by a topology-preserving level-set approach. The method's performance is illustrated on ex vivo images with 0.25–0.35 mm isotropic voxels. The method is further evaluated by cross-comparison with results of the FreeSurfer software on standard resolution data sets from the OASIS database featuring pairs of repeated scans for 20 healthy young subjects. PMID:22481909
Path method for reconstructing images in fluorescence optical tomography
Kravtsenyuk, Olga V; Lyubimov, Vladimir V; Kalintseva, Natalie A
2006-11-30
A reconstruction method elaborated for the optical diffusion tomography of the internal structure of objects containing absorbing and scattering inhomogeneities is considered. The method is developed for studying objects with fluorescing inhomogeneities and can be used for imaging of distributions of artificial fluorophores whose aggregations indicate the presence of various diseases or pathological deviations. (special issue devoted to multiple radiation scattering in random media)
Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images
T. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)
Statistical image reconstruction methods for simultaneous emission/transmission PET scans
Erdogan, H.; Fessler, J.A.
1996-12-31
Transmission scans are necessary for estimating the attenuation correction factors (ACFs) to yield quantitatively accurate PET emission images. To reduce the total scan time, post-injection transmission scans have been proposed in which one can simultaneously acquire emission and transmission data using rod sources and sinogram windowing. However, since the post-injection transmission scans are corrupted by emission coincidences, accurate correction for attenuation becomes more challenging. Conventional methods (emission subtraction) for ACF computation from post-injection scans are suboptimal and require relatively long scan times. We introduce statistical methods based on penalized-likelihood objectives to compute ACFs and then use them to reconstruct lower noise PET emission images from simultaneous transmission/emission scans. Simulations show the efficacy of the proposed methods. These methods improve image quality and SNR of the estimates as compared to conventional methods.
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Fox, Marsha; Hege, E. Keith; Hluck, Laura; Drummond, Jack; Harvey, David
1997-05-01
Speckle imaging techniques have been shown to mitigate atmospheric-resolution limits, allowing near-diffraction-limited images to be reconstructed. Few images of extended objects reconstructed by use of these techniques have been published, and most of these results are for relatively bright objects. We present image reconstructions of an orbiting Molniya 3 spacecraft from data collected by use of a 2.3-m ground-based telescope. The apparent brightness of the satellite was 15th visual magnitude. Power-spectrum and bispectrum speckle imaging techniques are used prior to image reconstruction to ameliorate atmospheric blurring. We discuss how these images, although poorly resolved, can be used to provide information on the satellite's functional status. It is shown that our previously published optimal algorithms produce a higher-quality image than do conventional speckle imaging methods.
NASA Astrophysics Data System (ADS)
Gao, Mingliang; Teng, Qizhi; He, Xiaohai; Zuo, Chen; Li, ZhengJi
2016-01-01
Three-dimensional (3D) structures are useful for studying the spatial structures and physical properties of porous media. A 3D structure can be reconstructed from a single two-dimensional (2D) training image (TI) by using mathematical modeling methods. Among many reconstruction algorithms, optimal-based algorithms have been developed and have strong stability. However, this type of algorithm generally uses an autocorrelation function (which is unable to accurately describe the morphological features of porous media) as its objective function. This has negatively affected further research on porous media. To accurately reconstruct 3D porous media, a pattern density function is proposed in this paper, which is based on a random variable employed to characterize image patterns. In addition, the paper proposes an original optimal-based algorithm called pattern density function simulation; this algorithm uses a pattern density function as its objective function, and adopts a multiple-grid system. To address the key issue of reconstruction speed, we propose the use of neighborhood statistics, the adjacent grid and reversed-phase method, and a simplified temperature-controlled mechanism. The pattern density function is a high-order statistical function; thus, when all grids in the reconstruction results converge in the objective functions, the morphological features and statistical properties of the reconstruction results will be consistent with those of the TI. The experiments include 2D reconstruction using one artificial structure, and 3D reconstruction using battery materials and cores. Hierarchical simulated annealing and single normal equation simulation are employed as the comparison algorithms. The autocorrelation function, linear path function, and pore network model are used as the quantitative measures. Comprehensive tests show that 3D porous media can be reconstructed accurately from a single 2D training image by using the proposed method.
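Optimal-based reconstruction of this kind typically iterates pixel swaps under an annealing rule. A minimal sketch of the Metropolis acceptance test and a geometric cooling schedule, offered as a simplified stand-in for the paper's temperature-controlled mechanism (function names are illustrative):

```python
import numpy as np

def metropolis_accept(delta_e, temperature, rng):
    """Metropolis rule for an annealing-style optimal reconstruction:
    always accept a pixel swap that lowers the objective (e.g. the
    pattern-density mismatch); otherwise accept with prob. exp(-dE/T)."""
    if delta_e <= 0:
        return True
    return bool(rng.random() < np.exp(-delta_e / temperature))

def cooling_schedule(t0, alpha, n_steps):
    """Geometric temperature decay T_k = t0 * alpha**k."""
    return [t0 * alpha ** k for k in range(n_steps)]
```

As the temperature decays, objective-raising swaps become increasingly rare, so the reconstruction settles into a state whose statistics match the training image.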
Analysis operator learning and its application to image reconstruction.
Hawe, Simon; Kleinsteuber, Martin; Diepold, Klaus
2013-06-01
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well-established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends heavily on the choice of a suitable operator. In this paper, we present an algorithm for learning an analysis operator from training images. Our method is based on ℓp-norm minimization on the set of full-rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single-image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques. PMID:23412611
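The analysis model can be made concrete with a small objective: a data-fidelity term plus a sparsity penalty on the analysis coefficients. The sketch below minimizes a smoothed ℓ1 analysis prior by plain gradient descent; it illustrates the model itself, not the authors' manifold-based operator-learning algorithm, and all names are assumptions:

```python
import numpy as np

def analysis_recon(y, A, Omega, lam, step, n_iter, eps=1e-2):
    """Gradient descent on the analysis-model objective
        0.5 * ||A x - y||^2 + lam * sum(sqrt((Omega x)^2 + eps)),
    i.e. a least-squares data term plus a smoothed l1 penalty on the
    analysis coefficients Omega @ x (sparse in the analysis domain).
    A is the measurement matrix, Omega the analysis operator.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = Omega @ x
        # data-fidelity gradient + gradient of the smoothed l1 penalty
        grad = A.T @ (A @ x - y) + lam * Omega.T @ (w / np.sqrt(w * w + eps))
        x = x - step * grad
    return x
```

With Omega a finite-difference operator, this penalty becomes a smoothed total-variation prior, which suppresses small oscillations while retaining large jumps.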
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
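Full CMA-ES maintains and adapts a covariance matrix over the coefficient vector; as a much smaller illustration of the evolution-strategy idea, a (1+1)-ES with 1/5th-success-rule step-size adaptation can minimize a real-coefficient objective. This is a deliberately simplified relative of the paper's optimizer, shown on a generic objective rather than the transform-evolution problem:

```python
import numpy as np

def one_plus_one_es(objective, x0, sigma0, n_iter, rng):
    """(1+1) evolution strategy with 1/5th-success-rule step-size
    adaptation: expand sigma on an improving mutation, shrink it
    otherwise, so roughly one mutation in five succeeds at equilibrium.
    """
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    sigma = sigma0
    for _ in range(n_iter):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = objective(cand)
        if fc <= fx:                 # success: accept and widen the search
            x, fx = cand, fc
            sigma *= 2.0
        else:                        # failure: narrow the search
            sigma *= 2.0 ** -0.25
    return x, fx
```

In the paper's setting, `x` would hold the real-valued forward/inverse transform coefficients and `objective` would score reconstruction error after quantization.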
Bayesian PET image reconstruction incorporating anato-functional joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2009-12-01
We developed a maximum a posteriori (MAP) reconstruction method for positron emission tomography (PET) image reconstruction incorporating magnetic resonance (MR) image information, with the joint entropy between the PET and MR image features serving as the regularization constraint. A non-parametric method was used to estimate the joint probability density of the PET and MR images. Using realistically simulated PET and MR human brain phantoms, the quantitative performance of the proposed algorithm was investigated. Incorporation of the anatomic information via this technique, after parameter optimization, was seen to dramatically improve the noise versus bias tradeoff in every region of interest, compared to the result from using conventional MAP reconstruction. In particular, hot lesions in the FDG PET image, which had no anatomical correspondence in the MR image, also had an improved contrast versus noise tradeoff. Corrections were made to figures 3, 4 and 6, and to the second paragraph of section 3.1 on 13 November 2009. The corrected electronic version is identical to the print version.
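The joint entropy used as the regularizer can be estimated from the joint intensity histogram of the two aligned images. A minimal histogram-based sketch (the paper itself uses a non-parametric density estimator rather than a plain histogram):

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=32):
    """Joint entropy H(A, B) in bits of two aligned images, estimated
    from their joint intensity histogram. Low joint entropy indicates
    that intensities in one image predict intensities in the other.
    """
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()       # normalize counts to a joint probability
    p = p[p > 0]                # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```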
Groussin, Mathieu; Hobbs, Joanne K; Szöllősi, Gergely J; Gribaldo, Simonetta; Arcus, Vickery L; Gouy, Manolo
2015-01-01
The resurrection of ancestral proteins provides direct insight into how natural selection has shaped proteins found in nature. By tracing substitutions along a gene phylogeny, ancestral proteins can be reconstructed in silico and subsequently synthesized in vitro. This elegant strategy reveals the complex mechanisms responsible for the evolution of protein functions and structures. However, to date, all protein resurrection studies have used simplistic approaches for ancestral sequence reconstruction (ASR), including the assumption that a single sequence alignment alone is sufficient to accurately reconstruct the history of the gene family. The impact of such shortcuts on conclusions about ancestral functions has not been investigated. Here, we show with simulations that utilizing information on species history using a model that accounts for the duplication, horizontal transfer, and loss (DTL) of genes statistically increases ASR accuracy. This underscores the importance of the tree topology in the inference of putative ancestors. We validate our in silico predictions using in vitro resurrection of the LeuB enzyme for the ancestor of the Firmicutes, a major and ancient bacterial phylum. With this particular protein, our experimental results demonstrate that information on the species phylogeny results in a biochemically more realistic and kinetically more stable ancestral protein. Additional resurrection experiments with different proteins are necessary to statistically quantify the impact of using species tree-aware gene trees on ancestral protein phenotypes. Nonetheless, our results suggest the need for incorporating both sequence and DTL information in future studies of protein resurrections to accurately define the genotype-phenotype space in which proteins diversify. PMID:25371435
Image reconstruction from Pulsed Fast Neutron Analysis
NASA Astrophysics Data System (ADS)
Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob
1999-06-01
Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic to their elemental composition. By timing the arrival of the emitted radiation to an array of gamma-ray detectors a three-dimensional elemental density map or image of the cargo is created. The process to determine the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing includes also phenomenological corrections that take into account the neutron attenuation through the cargo, and the attenuation of the gamma rays from the point they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition system of coordinates to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.
Comparison of image reconstruction methods for structured illumination microscopy
NASA Astrophysics Data System (ADS)
Lukeš, Tomas; Hagen, Guy M.; Křížek, Pavel; Švindrych, Zdeněk.; Fliegel, Karel; Klíma, Miloš
2014-05-01
Structured illumination microscopy (SIM) is a recent microscopy technique that enables one to go beyond the diffraction limit using patterned illumination. The high-frequency information is encoded through aliasing into the observed image. By acquiring multiple images with different illumination patterns, aliased components can be separated and a high-resolution image reconstructed. Here we investigate image processing methods that perform the task of high-resolution image reconstruction, namely square-law detection, scaled subtraction, super-resolution SIM (SR-SIM), and Bayesian estimation. The optical sectioning and lateral resolution improvement abilities of these algorithms were tested under various noise level conditions on simulated data and on fluorescence microscopy images of a pollen grain test sample and of a cultured cell stained for the actin cytoskeleton. In order to compare the performance of the algorithms, the following objective criteria were evaluated: Signal to Noise Ratio (SNR), Signal to Background Ratio (SBR), the circular average of the power spectral density and the S3 sharpness index. The results show that SR-SIM and Bayesian estimation combine illumination-patterned images more effectively and provide better lateral resolution in exchange for more complex image processing. SR-SIM requires one to precisely shift the separated spectral components to their proper positions in reciprocal space. High noise levels in the raw data can cause inaccuracies in the shifts of the spectral components, which degrade the super-resolved image. Bayesian estimation has proven to be more robust to changes in noise level and illumination pattern frequency.
A novel data processing technique for image reconstruction of penumbral imaging
NASA Astrophysics Data System (ADS)
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is new: for the first time, the coded-aperture data are processed independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the system's point spread function is overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In a visible-light experiment, a point source was used to irradiate a 5 mm × 5 mm object after diffuse and volume scattering, and penumbral images were acquired with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was applied and provided a fairly good reconstruction result.
Accurate reconstruction of the thermal conductivity depth profile in case hardened steel
NASA Astrophysics Data System (ADS)
Celorrio, Ricardo; Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Mandelis, Andreas
2010-04-01
The problem of retrieving a nonhomogeneous thermal conductivity profile from photothermal radiometry data is addressed from the perspective of a stabilized least-squares fitting algorithm. We have implemented an inversion method with several improvements: (a) a renormalization of the experimental data which removes not only the instrumental factor but also the constants affecting the amplitude and the phase, (b) the introduction of a frequency weighting factor to balance the contributions of high and low frequencies in the inversion algorithm, (c) the simultaneous fitting of amplitude and phase data, balanced according to their experimental noises, (d) a modified Tikhonov regularization procedure to stabilize the inversion, and (e) use of the Morozov discrepancy principle to stop the iterative process automatically, according to the experimental noise, and avoid "overfitting" of the experimental data. We have tested this improved method by fitting theoretical data generated from a known conductivity profile. Finally, we have applied the method to real data obtained on a hardened stainless steel plate. The reconstructed in-depth thermal conductivity profile exhibits low dispersion, even at the deepest locations, and is in good anticorrelation with the hardness indentation test.
Statistical reconstruction algorithms for continuous wave electron spin resonance imaging
NASA Astrophysics Data System (ADS)
Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon
2013-06-01
Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is effective in both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Reconstructing an SR image at a factor of two with best accuracy requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments were conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum A Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
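The two-step projection described above can be sketched as follows. Nearest-neighbour assignment is shown for simplicity (the abstract also mentions inverse-distance and RBF interpolation), and all names are ours, not the paper's:

```python
import numpy as np

def project_to_hr(lr_images, shifts, factor=2):
    """Project registered LR images onto an HR grid (step ii of the
    algorithm; step i, registration, is assumed done and summarized by
    'shifts'). lr_images: list of (h, w) arrays; shifts: per-image
    (dy, dx) subpixel offsets relative to the reference image, in LR
    pixel units. Nearest-neighbour assignment, averaging collisions."""
    h, w = lr_images[0].shape
    hr_sum = np.zeros((h * factor, w * factor))
    hr_cnt = np.zeros_like(hr_sum)
    for img, (dy, dx) in zip(lr_images, shifts):
        ys, xs = np.mgrid[0:h, 0:w]
        # LR sample (y, x) of an image shifted by (dy, dx) lands at
        # HR coordinate ((y + dy) * factor, (x + dx) * factor).
        hy = np.clip(np.rint((ys + dy) * factor).astype(int), 0, h * factor - 1)
        hx = np.clip(np.rint((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(hr_sum, (hy, hx), img)
        np.add.at(hr_cnt, (hy, hx), 1)
    return hr_sum / np.maximum(hr_cnt, 1)
```

With factor 2 and the four canonical shifts (0, 0), (0, ½), (½, 0), (½, ½), every HR pixel receives exactly one LR sample, which illustrates why four LR images with independent subpixel shifts suffice for a factor-of-two reconstruction.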
Kang, Dongwan D.; Froula, Jeff; Egan, Rob; Wang, Zhong
2015-01-01
Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high-quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.
A Case Series of Rapid Prototyping and Intraoperative Imaging in Orbital Reconstruction
Lim, Christopher G.T.; Campbell, Duncan I.; Cook, Nicholas; Erasmus, Jason
2014-01-01
In Christchurch Hospital, rapid prototyping (RP) and intraoperative imaging are the standard of care in orbital trauma and have been used since February 2013. RP allows the fabrication of a dimensionally accurate and cost-effective anatomical model for visualizing complex anatomical structures. This assists diagnosis, planning, and preoperative implant adaptation for orbital reconstruction. Intraoperative imaging involves a computed tomography scan during surgery to evaluate surgical implants and restored anatomy, and allows the clinician to correct errors in implant positioning during the same procedure. This article aims to demonstrate the potential clinical and cost-saving benefits when both these technologies are used in orbital reconstruction, minimizing the need for revision surgery. PMID:26000080
Holographic particle image velocimetry: analysis using a conjugate reconstruction geometry
NASA Astrophysics Data System (ADS)
Barnhart, D. H.; Halliwell, N. A.; Coupland, J. M.
2000-10-01
Holographic recording techniques have recently been studied as a means to extend two-component, planar particle image velocimetry (PIV) techniques to three-component, whole-field velocity measurements. In a similar manner to two-component PIV, three-component holographic PIV (HPIV) uses correlation-based techniques to extract particle displacement fields from double-exposure holograms. Since a holographic image contains information concerning both the phase and the amplitude of the scattered field, it is possible to correlate either the intensity or the complex amplitude. In previous work we have shown that optical methods to compute the autocorrelation of the complex amplitude are inherently more tolerant of aberrations introduced in the reconstruction process (Coupland and Halliwell, Proc. Roy. Soc. 453 (1997) 1066). In this paper we introduce a new method of holographic recording and reconstruction that allows a constant image shift to be added to the particle image displacement. The technique, which we call conjugate reconstruction, resolves directional ambiguity and extends the dynamic range of HPIV. The theory of this method is examined in detail and a relationship between the image and object displacement is derived. Experimental verification of the theory is presented.
Toward 5D image reconstruction for optical interferometry
NASA Astrophysics Data System (ADS)
Baron, Fabien; Kloppenborg, Brian; Monnier, John
2012-07-01
We report on our progress toward flexible image reconstruction software for optical interferometry capable of "5D imaging" of stellar surfaces. 5D imaging is here defined as the capability to image one or several stars directly in three dimensions, with both the time and wavelength dependencies taken into account during the reconstruction process. Our algorithm makes use of the Healpix (Gorski et al., 2005) sphere partition scheme to tessellate the stellar surface, the 3D Open Graphics Library (OpenGL) to model the spheroid geometry and the Roche gravitational potential equation, and the Open Computing Language (OpenCL) framework for all other computations. We use the Markov chain Monte Carlo software SQUEEZE to solve the image reconstruction problem on the surfaces of these stars. Finally, the Compressed Sensing and Bayesian Evidence paradigms are employed to determine the best regularization for spotted stars.
Atmospheric isoplanatism and astronomical image reconstruction on Mauna Kea
Cowie, L.L.; Songaila, A.
1988-07-01
Atmospheric isoplanatism for visual-wavelength image-reconstruction applications was measured on Mauna Kea in Hawaii. For most nights the correlation of the transfer functions is substantially wider than the long-exposure transfer function at separations up to 30 arcsec. Theoretical analysis shows that this is reasonable if the mean Fried parameter is approximately 30 cm at 5500 Å. Reconstructed image quality may be described by a Gaussian with a FWHM of λ/s₀. Under average conditions, s₀ (30 arcsec) exceeds 55 cm at 7000 Å. The results show that visual image quality in the 0.1-0.2 arcsec range is obtainable over much of the sky with large ground-based telescopes on this site.
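The quoted image quality follows directly from the Gaussian FWHM λ/s₀ converted to arcseconds; a quick numerical check with the abstract's own numbers (the function name is ours):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206265 arcsec per radian

def seeing_fwhm_arcsec(wavelength_m, s0_m):
    """FWHM of the reconstructed Gaussian PSF, lambda / s0, in arcsec."""
    return wavelength_m / s0_m * RAD_TO_ARCSEC

# Abstract's numbers: s0 >= 55 cm at 7000 Angstroms, so this is an
# upper bound on the FWHM, roughly 0.26 arcsec; larger s0 under better
# conditions pushes it into the quoted 0.1-0.2 arcsec range.
fwhm = seeing_fwhm_arcsec(7000e-10, 0.55)
```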
Colored three-dimensional reconstruction of vehicular thermal infrared images
NASA Astrophysics Data System (ADS)
Sun, Shaoyuan; Leung, Henry; Shen, Zhenyi
2015-06-01
Enhancement of vehicular night vision thermal infrared images is an important problem in intelligent vehicles. We propose to create a colorful three-dimensional (3-D) display of infrared images for the vehicular night vision assistant driving system. We combine the plane parameter Markov random field (PP-MRF) model-based depth estimation with classification-based infrared image colorization to perform colored 3-D reconstruction of vehicular thermal infrared images. We first train the PP-MRF model to learn the relationship between superpixel features and plane parameters. The infrared images are then colorized and we perform superpixel segmentation and feature extraction on the colorized images. The PP-MRF model is used to estimate the superpixel plane parameter and to analyze the structure of the superpixels according to the characteristics of vehicular thermal infrared images. Finally, we estimate the depth of each pixel to perform 3-D reconstruction. Experimental results demonstrate that the proposed method can give a visually pleasing and daytime-like colorful 3-D display from a monochromatic vehicular thermal infrared image, which can help drivers to have a better understanding of the environment.
3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods
NASA Astrophysics Data System (ADS)
Hosseinian, S.; Arefi, H.
2015-12-01
The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as acquisition in non-weight-bearing positions, cost and, for CT, high radiation dose. Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternatives for achieving accurate 3D models with low radiation dose in weight-bearing positions. Different photogrammetry-based methods have been proposed for 3D reconstruction from X-ray images, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are discussed. Finally, the presented methods are compared with respect to several criteria, such as accuracy, reconstruction time and application. Each method has advantages and disadvantages that should be considered for a specific application.
Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform
Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo
2015-09-09
Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem consisting of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when total variation (TV) is used for the l1 regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well-known drawback of using TV as an l1 regularization term is that the reconstructed image tends to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher-order sparsifying transform, coined the "polynomial annihilation (PA) transform." This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
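Whichever sparsifying transform is used (TV or the PA transform), each Split Bregman iteration reduces the l1 subproblem to an elementwise soft-thresholding ("shrinkage") step with a closed-form solution. A minimal sketch of that step; the naming is ours:

```python
import numpy as np

def shrink(x, t):
    """Elementwise soft-thresholding: the closed-form minimizer of
    t*||d||_1 + 0.5*||d - x||^2, applied to the transformed image in
    each Split Bregman iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
w = shrink(v, 0.5)  # entries within the threshold are zeroed,
                    # the rest move 0.5 toward zero
```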
PET Image Reconstruction Using Information Theoretic Anatomical Priors
Somayajula, Sangeetha; Panagiotou, Christos; Rangarajan, Anand; Li, Quanzheng; Arridge, Simon R.
2011-01-01
We describe a nonparametric framework for incorporating information from co-registered anatomical images into positron emission tomographic (PET) image reconstruction through priors based on information theoretic similarity measures. We compare and evaluate the use of mutual information (MI) and joint entropy (JE) between feature vectors extracted from the anatomical and PET images as priors in PET reconstruction. Scale-space theory provides a framework for the analysis of images at different levels of detail, and we use this approach to define feature vectors that emphasize prominent boundaries in the anatomical and functional images, and attach less importance to detail and noise that is less likely to be correlated in the two images. Through simulations that model the best case scenario of perfect agreement between the anatomical and functional images, and a more realistic situation with a real magnetic resonance image and a PET phantom that has partial volumes and a smooth variation of intensities, we evaluate the performance of MI and JE based priors in comparison to a Gaussian quadratic prior, which does not use any anatomical information. We also apply this method to clinical brain scan data using F18 Fallypride, a tracer that binds to dopamine receptors and therefore localizes mainly in the striatum. We present an efficient method of computing these priors and their derivatives based on fast Fourier transforms that reduce the complexity of their convolution-like expressions. Our results indicate that while sensitive to initialization and choice of hyperparameters, information theoretic priors can reconstruct images with higher contrast and superior quantitation than quadratic priors. PMID:20851790
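The MI and JE priors are, at bottom, plug-in estimates computed from image histograms. A minimal sketch of the estimators (the binning choice and function names are ours, not the paper's; the paper's scale-space feature extraction and FFT-based acceleration are omitted):

```python
import numpy as np

def entropy(a, bins=32):
    """Marginal entropy H(A) in nats from a histogram of image values."""
    h, _ = np.histogram(a.ravel(), bins=bins)
    p = h[h > 0] / h.sum()
    return -np.sum(p * np.log(p))

def joint_entropy(a, b, bins=32):
    """Joint entropy H(A,B) from the 2-D histogram of two co-registered
    images; lower when the images share structure."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h[h > 0] / h.sum()
    return -np.sum(p * np.log(p))

def mutual_information(a, b, bins=32):
    """MI(A,B) = H(A) + H(B) - H(A,B); higher when the images share
    structure, which is why it can serve as an anatomical prior."""
    return entropy(a, bins) + entropy(b, bins) - joint_entropy(a, b, bins)
```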
NASA Astrophysics Data System (ADS)
Hong, Daeki; Cho, Heemoon; Cho, Hyosung; Choi, Sungil; Je, Uikyu; Park, Yeonok; Park, Chulkyu; Lim, Hyunwoo; Park, Soyoung; Woo, Taeho
2015-11-01
In this work, we performed a feasibility study on three-dimensional (3D) image reconstruction in a truncated Archimedean-like spiral geometry with a long rectangular detector, for application to highly accurate, cost-effective dental x-ray imaging. Here an x-ray tube and a detector rotate together around the rotational axis several times while, concurrently, the detector moves horizontally in the detector coordinate system at a constant speed to cover the whole imaging volume during projection data acquisition. We established a table-top setup which mainly consists of an x-ray tube (60 kVp, 5 mA), a narrow CMOS-type detector (198-μm pixel resolution, 184 (W)×1176 (H) pixel dimension), and a rotational stage for sample mounting, and performed a systematic experiment to demonstrate the viability of the proposed approach to volumetric dental imaging. For the image reconstruction, we employed a compressed-sensing (CS)-based algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate reconstruction. We successfully reconstructed 3D images of considerably high quality and investigated the image characteristics in terms of the image value profile, the contrast-to-noise ratio (CNR), and the spatial resolution.
Colorful holographic imaging reconstruction based on one thin phase plate
NASA Astrophysics Data System (ADS)
Zhu, Jing; Song, Qiang; Wang, Jian; Yue, Weirui; Zhang, Fang; Huang, Huijie
2014-11-01
A method of realizing color holographic imaging using a single thin diffractive optical element (DOE) is proposed. The method can reconstruct a two-dimensional color image with one phase plate at a user-defined distance from the DOE. To improve the resolution of the reproduced color images, the DOE is optimized by combining the Gerchberg-Saxton algorithm with a compensation algorithm. To accelerate the computation, a graphics processing unit (GPU) is used. Finally, simulation results were analyzed to verify the validity of the method.
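The Gerchberg-Saxton part of the optimization can be sketched as a plain FFT iteration between the DOE plane and the image plane. This is a generic sketch, not the paper's implementation: the compensation algorithm, GPU acceleration, and per-color handling are omitted, and all names are ours:

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=100, seed=0):
    """Phase-only DOE design by Gerchberg-Saxton: alternate between the
    DOE plane (unit amplitude, free phase) and the far field (target
    amplitude, free phase), linked by the FFT. Returns the DOE phase."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate to image plane
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                       # back to DOE plane
        phase = np.angle(near)                         # keep phase only
    return phase

# Toy target: a bright square in the reconstruction plane.
target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0
doe_phase = gerchberg_saxton(target)
```

The reconstructed intensity |FFT(exp(i·phase))|² then approximates the target; each iteration is non-increasing in the Fourier-domain error, which is the property the compensation algorithm builds on.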
Image reconstruction techniques applied to nuclear mass models
NASA Astrophysics Data System (ADS)
Morales, Irving O.; Isacker, P. Van; Velazquez, V.; Barea, J.; Mendoza-Temis, J.; Vieyra, J. C. López; Hirsch, J. G.; Frank, A.
2010-02-01
A new procedure is presented that combines well-known nuclear models with image reconstruction techniques. A color-coded image is built by taking the differences between measured masses and the predictions given by the different theoretical models. This image is viewed as part of a larger array in the (N,Z) plane, where unknown nuclear masses are hidden, covered by a "mask." We apply a suitably adapted deconvolution algorithm, used in astronomical observations, to "open the window" and see the rest of the pattern. We show that it is possible to significantly improve mass predictions in regions not too far from measured nuclear masses.
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
The SRT reconstruction algorithm for semiquantification in PET imaging
Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.
2015-10-15
Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation-maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT
LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2015-01-01
Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanner. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present an LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3-D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the nonnegative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which
Accuracy of quantitative reconstructions in SPECT/CT imaging
NASA Astrophysics Data System (ADS)
Shcherbinin, S.; Celler, A.; Belhocine, T.; van der Werf, R.; Driedger, A.
2008-09-01
The goal of this study was to determine the quantitative accuracy of our OSEM-APDI reconstruction method based on SPECT/CT imaging for Tc-99m, In-111, I-123, and I-131 isotopes. Phantom studies were performed on a SPECT/low-dose multislice CT system (Infinia-Hawkeye-4 slice, GE Healthcare) using clinical acquisition protocols. Two radioactive sources were centrally and peripherally placed inside an anthropomorphic thorax phantom filled with non-radioactive water. Corrections for attenuation, scatter, collimator blurring and collimator septal penetration were applied and their contribution to the overall accuracy of the reconstruction was evaluated. Reconstruction with the most comprehensive set of corrections resulted in activity estimation with error levels of 3-5% for all the isotopes.
Light field display and 3D image reconstruction
NASA Astrophysics Data System (ADS)
Iwane, Toru
2016-06-01
Light field optics and its applications have become rather popular in recent years. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two steps. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data; that is, the 3D information is encoded onto a plane as 2D data by a lens-array plate. This transformation is reversible, and the acquired light field data can be decoded back into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after the picture has been taken), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper we first show our actual light field camera and our 3D display, on which a real 3D image is reconstructed from acquired and computer-simulated light field data. We then explain our data-processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens-array plate with the flat display on which the light field data are displayed.
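The refocusing ("sectioning") operation described above can be illustrated as a shift-and-sum over angular views. Below is a toy 1D ("flatland") sketch with integer-pixel shifts; real light-field refocusing works on 4D data with sub-pixel interpolation, and the one-pixel-per-view disparity model here is invented for illustration:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing of a (u, x) light field slice.

    lightfield: 2D array, axis 0 = angular sample u, axis 1 = spatial x.
    alpha: integer shear per angular step (the refocus depth parameter).
    """
    n_u = lightfield.shape[0]
    centre = n_u // 2
    shifted = [np.roll(row, alpha * (u - centre))
               for u, row in enumerate(lightfield)]
    return np.mean(shifted, axis=0)

# A point emitter whose views have a disparity of one pixel per angular
# step: it comes into focus (all views align) at alpha = 1.
lf = np.zeros((5, 11))
for u in range(5):
    lf[u, 5 - (u - 2)] = 1.0
img = refocus(lf, 1)     # sharp peak at x = 5
```

At the wrong depth (e.g. `refocus(lf, 0)`) the views do not align and the point smears out, which is exactly the sectioning behaviour described above.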
Min, Jonghwan; Pua, Rizza; Cho, Seungryong; Kim, Insoo; Han, Bumsoo
2015-11-15
Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and a noise-reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed rebinned BPF algorithm can reconstruct scatter-corrected images from partially blocked cone-beam data acquired in a single scan.
Comparison of power spectra for tomosynthesis projections and reconstructed images
Engstrom, Emma; Reiser, Ingrid; Nishikawa, Robert
2009-01-01
Burgess et al. [Med. Phys. 28, 419–437 (2001)] showed that the power spectrum of mammographic breast background follows a power law and that lesion detectability is affected by the power-law exponent β which measures the amount of structure in the background. Following the study of Burgess et al., the authors measured and compared the power-law exponent of mammographic backgrounds in tomosynthesis projections and reconstructed slices to investigate the effect of tomosynthesis imaging on background structure. Our data set consisted of 55 patient cases. For each case, regions of interest (ROIs) were extracted from both projection images and reconstructed slices. The periodogram of each ROI was computed by taking the squared modulus of the Fourier transform of the ROI. The power-law exponent was determined for each periodogram and averaged across all ROIs extracted from all projections or reconstructed slices for each patient data set. For the projections, the mean β averaged across the 55 cases was 3.06 (standard deviation of 0.21), while it was 2.87 (0.24) for the corresponding reconstructions. The difference in β for a given patient between the projection ROIs and the reconstructed ROIs averaged across the 55 cases was 0.194, which was statistically significant (p<0.001). The 95% CI for the difference between the mean value of β for the projections and reconstructions was [0.170, 0.218]. The results are consistent with the observation that the amount of breast structure in the tomosynthesis slice is reduced compared to projection mammography and that this may lead to improved lesion detectability. PMID:19544793
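The power-law exponent β used in this comparison can be estimated from a periodogram as minus the slope of a straight-line fit in log-log coordinates. A minimal sketch on a noiseless synthetic power-law spectrum; real periodograms are noisy, so in the study the fit is averaged over many ROIs:

```python
import numpy as np

f = np.linspace(0.05, 0.5, 100)    # spatial frequencies (cycles/pixel)
power = f ** (-3.0)                # ideal power-law periodogram, beta = 3

# P(f) ~ f^(-beta)  =>  log P = -beta * log f + const,
# so beta is minus the slope of the log-log fit.
slope, intercept = np.polyfit(np.log(f), np.log(power), 1)
beta_est = -slope
```

A larger fitted β corresponds to more low-frequency structure in the background, which is the quantity the study compares between projections and reconstructed slices.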
Tomographic image reconstruction and rendering with texture-mapping hardware
Azevedo, S.G.; Cabral, B.K.; Foran, J.
1994-07-01
The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
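The "image warping operation with summing" formulation can be mimicked in software: replicate each projection across the image plane, warp (rotate) it to its acquisition angle, and accumulate. A minimal unfiltered-backprojection sketch; the texture-mapping hardware performs the warp step, and a full FBP would first ramp-filter each projection:

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Backprojection cast as image warping: each projection is
    replicated ("smeared") along its rays, rotated to its acquisition
    angle, and summed. sinogram: (n_angles, n_det) array; returns an
    (n_det, n_det) image.
    """
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles_deg):
        smear = np.tile(proj, (n, 1))                  # replicate along rays
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon / len(angles_deg)

# A point object at the center backprojects to a sharp peak at the center.
n = 11
proj = np.zeros(n)
proj[n // 2] = 1.0
sino = np.tile(proj, (4, 1))                           # same view at 4 angles
img = backproject(sino, [0.0, 45.0, 90.0, 135.0])
```

The characteristic 1/r streaks of unfiltered backprojection appear away from the peak; ramp-filtering each projection before the warp removes them.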
Performance validation of phase diversity image reconstruction techniques
NASA Astrophysics Data System (ADS)
Hirzberger, J.; Feller, A.; Riethmüller, T. L.; Gandorfer, A.; Solanki, S. K.
2011-05-01
We present a performance study of a phase diversity (PD) image reconstruction algorithm based on artificial solar images obtained from MHD simulations and on seeing-free data obtained with the SuFI instrument on the Sunrise balloon-borne observatory. The artificial data were altered by applying different levels of degradation with synthesised wavefront errors and noise. The PD algorithm was modified by changing the number of fitted polynomials, the shape of the pupil and the applied noise filter. The obtained reconstructions are evaluated by means of the resulting rms intensity contrast and by the conspicuousness of artifacts. The results show that PD is a robust method which consistently recovers the initial unaffected image contents. The efficiency of the reconstruction is, however, strongly dependent on the number of fitted polynomials and the noise level of the images. If the maximum number of fitted polynomials is higher than 21, artifacts have to be accepted, and for noise levels higher than 10^-3 the commonly used noise filtering techniques are not able to avoid amplification of spurious structures.
Edge-Preserving PET Image Reconstruction Using Trust Optimization Transfer
Wang, Guobao; Qi, Jinyi
2014-01-01
Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization. The most commonly used quadratic penalty often over-smoothes sharp edges and fine features in reconstructed images, while non-quadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or non-smooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction by using two strategies: trust surrogate and optimization transfer descent. Trust surrogate approximates the original penalty by a smoother function at each iteration, but guarantees the algorithm to descend monotonically; Optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results of computer simulations and real 3D data show that the proposed algorithm converges much faster than the conventional EM and PCG for smooth edge-preserving regularization and can also be more efficient than the current state-of-art algorithms for the non-smooth ℓ1 regularization. PMID:25438302
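The trust-surrogate idea, replacing a non-smooth penalty by a smooth approximation and tightening that approximation as iterations proceed, can be illustrated on a scalar ℓ1-regularized problem whose exact minimizer is the soft-threshold. This toy gradient-descent version only illustrates the general principle and is not the authors' algorithm:

```python
import numpy as np

def soft_threshold(y, lam):
    """Closed-form minimizer of 0.5*(x - y)**2 + lam*|x|."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def surrogate_min(y, lam, deltas=(1.0, 0.1, 0.01, 1e-4), iters=200):
    """Minimize 0.5*(x - y)**2 + lam*|x| by replacing |x| with the
    smooth surrogate sqrt(x**2 + delta**2) and shrinking delta each
    outer round, so the surrogate approaches the true penalty."""
    x = float(y)
    for delta in deltas:
        for _ in range(iters):
            grad = (x - y) + lam * x / np.sqrt(x * x + delta * delta)
            x -= 0.5 * grad          # fixed step, stable for this 1D problem
    return x

x_hat = surrogate_min(2.0, 0.5)      # should approach soft_threshold(2.0, 0.5)
```

As delta shrinks, the smoothed solution converges to the non-smooth ℓ1 solution, which is the behaviour the trust-surrogate strategy exploits at scale.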
Blackman, Arne V.; Grabuschnig, Stefan; Legenstein, Robert; Sjöström, P. Jesper
2014-01-01
Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate due to the occasional poor outcome of histological processing. However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use.
Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.
Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H
2015-11-01
Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo-continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom, and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20 s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness. PMID:26169322
NASA Astrophysics Data System (ADS)
Fahrig, Rebecca; Fox, Allan J.; Holdsworth, David W.
1996-04-01
Treatment of subarachnoid aneurysms with endovascular techniques (e.g. placement of Guglielmi coils) is currently limited by the inability to visualize the neck of the aneurysm after the initial coils have been placed. Coils projecting into parent vessels may cause thrombosis, while incomplete filling leads to regrowth of the aneurysm. Since the procedure is performed using a gantry-mounted x-ray image intensifier (XRII), we have used this system to obtain 2-dimensional (2-D) images over approximately 200 degrees and reconstructed a 3-D image of the embolizing coil, residual aneurysm, and the parent vessel. The required data can be acquired, reconstructed and presented to the neuroradiologist during the interventional procedure. We have characterized an existing clinical C-arm radiographic system, and have developed correction procedures which provide a consistent and adequate data set for standard CT reconstruction using convolution-backprojection techniques. The angle of acquisition for each image is recorded using a hub-mounted angle encoder accurate to within ±0.3°. We have characterized the XRII distortion over the rotation, and can correct images acquired at any known angle to within ±0.06 pixels. We have also measured the motion of the center of rotation (reproducible to within ±0.13 pixels) and correct for the displacement using an image shift and interpolation algorithm. Finally, we have investigated the effects on CT reconstruction of variable dilutions of contrast agent during the cardiac cycle, and have shown the contrast-to-noise ratio (CNR), in the absence of photon noise, to be 28. This is larger than the CNR which we achieve due to photon noise alone.
Brigadoi, Sabrina; Powell, Samuel; Cooper, Robert J.; Dempsey, Laura A.; Arridge, Simon; Everdell, Nick; Hebden, Jeremy; Gibson, Adam P.
2015-01-01
In diffuse optical tomography (DOT), real-time image reconstruction of oxy- and deoxy-haemoglobin changes occurring in the brain could give valuable information in clinical care settings. Although non-linear reconstruction techniques could provide more accurate results, their computational burden makes them unsuitable for real-time applications. Linear techniques can be employed under the assumption that the expected change in absorption is small. Several approaches exist, differing primarily in their handling of regularization and the noise statistics. In real experiments, it is impossible to compute the true noise statistics, because of the presence of physiological oscillations in the measured data. This is even more critical in real-time applications, where no off-line filtering and averaging can be performed to reduce the noise level. Therefore, many studies substitute the noise covariance matrix with the identity matrix. In this paper, we examined two questions: does using the noise model with realistic, imperfect data yield an improvement in image quality compared to using the identity matrix; and what is the difference in quality between online and offline reconstructions. Bespoke test data were created using a novel process through which simulated changes in absorption were added to real resting-state DOT data. A realistic multi-layer head model was used as the geometry for the reconstruction. Results validated our assumptions, highlighting the validity of computing the noise statistics from the measured data for online image reconstruction, which was performed at 2 Hz. Our results can be directly extended to a real application where real-time imaging is required. PMID:26713189
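The linear reconstruction with explicit noise statistics discussed above amounts to a regularized weighted least-squares estimate. Below is a sketch with an invented random Jacobian and a diagonal noise covariance; passing C = I reproduces the identity-matrix substitution the paper examines:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(40, 20))          # hypothetical sensitivity (Jacobian) matrix
x_true = np.zeros(20)
x_true[3] = 1.0                        # localized absorption change

def linear_recon(J, y, C_inv_diag, lam):
    """Regularized linear estimate x = (J^T C^-1 J + lam*I)^-1 J^T C^-1 y,
    with C the (here diagonal) measurement-noise covariance. Passing
    C_inv_diag = ones reproduces the identity-matrix substitution."""
    JtC = J.T * C_inv_diag             # J^T C^-1 via column scaling
    A = JtC @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, JtC @ y)

sigma = np.linspace(0.1, 1.0, 40)      # heteroscedastic channel noise levels
y = J @ x_true + sigma * rng.normal(size=40)
x_noise_model = linear_recon(J, y, 1.0 / sigma**2, lam=1.0)
x_identity = linear_recon(J, y, np.ones(40), lam=1.0)
```

With noiseless data and negligible regularization the estimate recovers the true change; the interesting question in the paper is how much the realistic (measured) noise model helps when the data contain physiological oscillations.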
An efficient simultaneous reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Soria, Julio
2009-10-01
To date, Tomo-PIV has involved the use of the multiplicative algebraic reconstruction technique (MART), where the intensity of each 3D voxel is iteratively corrected to satisfy one recorded projection, or pixel intensity, at a time. This results in reconstruction times of multiple hours for each velocity field and requires considerable computer memory in order to store the associated weighting coefficients and intensity values for each point in the volume. In this paper, a rapid and less memory intensive reconstruction algorithm is presented based on a multiplicative line-of-sight (MLOS) estimation that determines possible particle locations in the volume, followed by simultaneous iterative correction. Reconstructions of simulated images are presented for two simultaneous algorithms (SART and SMART) as well as the now standard MART algorithm, which indicate that the same accuracy as MART can be achieved 5.5 times faster or 77 times faster with 15 times less memory if the processing and storage of the weighting matrix is considered. Application of MLOS-SMART and MART to a turbulent boundary layer at Re_θ = 2200 using a 4 camera Tomo-PIV system with a volume of 1,000 × 1,000 × 160 voxels is discussed. Results indicate improvements in reconstruction speed of 15 times that of MART with precalculated weighting matrix, or 65 times if calculation of the weighting matrix is considered. Furthermore, the memory needed to store a large weighting matrix and volume intensity is reduced by almost 40 times in this case.
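The MART baseline described above corrects voxel intensities multiplicatively, one recorded projection at a time. A small dense-matrix sketch of the update rule; a real Tomo-PIV system uses sparse weights over millions of voxels, and the 3-voxel system below is invented for illustration:

```python
import numpy as np

def mart(W, y, n_iter=50, mu=1.0):
    """Multiplicative algebraic reconstruction technique (MART) sketch.
    W: (n_rays, n_voxels) weighting matrix; y: recorded projections.
    Each ray i multiplicatively corrects the voxels it crosses:
        x_j <- x_j * (y_i / (W_i @ x)) ** (mu * W_ij)
    """
    x = np.ones(W.shape[1])              # positive initial guess
    for _ in range(n_iter):
        for Wi, yi in zip(W, y):
            pred = Wi @ x                # current projection estimate
            if pred > 0 and yi > 0:
                x *= (yi / pred) ** (mu * Wi)
    return x

W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
x_hat = mart(W, W @ x_true)              # converges toward x_true
```

Because each of the many rays triggers a separate correction sweep, full MART is slow, which motivates the MLOS pre-selection and simultaneous (SART/SMART) corrections proposed in the paper.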
A dual oxygenation and fluorescence imaging platform for reconstructive surgery
NASA Astrophysics Data System (ADS)
Ashitate, Yoshitomo; Nguyen, John N.; Venugopal, Vivek; Stockdale, Alan; Neacsu, Florin; Kettenring, Frank; Lee, Bernard T.; Frangioni, John V.; Gioux, Sylvain
2013-03-01
There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively, leading to a large number of failures, patient morbidity, and increased healthcare costs. Because near-infrared (NIR) optical imaging is safe, noncontact, inexpensive, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. These capabilities are well illustrated through the clinical translation of fluorescence imaging during oncologic surgery. In this work, we introduce a novel imaging platform that combines two complementary NIR optical modalities: oxygenation imaging and fluorescence imaging. We validated this platform during facial reconstructive surgery on large animals approaching the size of humans. We demonstrate that NIR fluorescence imaging provides identification of perforator arteries, assesses arterial perfusion, and can detect thrombosis, while oxygenation imaging permits the passive monitoring of tissue vital status, as well as the detection and origin of vascular compromise simultaneously. Together, the two methods provide a comprehensive approach to identifying problems and intervening in real time during surgery before irreparable damage occurs. Taken together, this novel platform provides fully integrated and clinically friendly endogenous and exogenous NIR optical imaging for improved image-guided intervention during surgery.
Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When the images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU Gamma analysis (2%, 2mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning.
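The pixel-by-pixel HU comparison reported above can be expressed as a tolerance pass rate. A minimal sketch; the 2×2 images are invented for illustration, and the ±35 HU band mirrors the tolerance quoted in the results:

```python
import numpy as np

def hu_agreement(img_a, img_b, tol_hu=35.0):
    """Fraction of pixels whose HU values agree within +/- tol_hu,
    mirroring the pixel-by-pixel comparison described above."""
    diff = np.abs(np.asarray(img_a, float) - np.asarray(img_b, float))
    return float(np.mean(diff <= tol_hu))

# Toy 2x2 'standard' vs 'MAR' reconstructions (HU values illustrative only).
std = np.array([[0.0, 40.0], [-100.0, 50.0]])
mar = np.array([[10.0, 80.0], [-120.0, 55.0]])
rate = hu_agreement(std, mar)        # 3 of 4 pixels agree within +/-35 HU
```

Applied to full phantom images, this is the statistic behind figures such as "95.0% of pixels were within ±35 HU".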
Reconstruction of hyperspectral image using matting model for classification
NASA Astrophysics Data System (ADS)
Xie, Weiying; Li, Yunsong; Ge, Chiru
2016-05-01
Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which makes them unsuitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model refers to the decomposition of each spectral band of an HSI into three components, namely the alpha channel, the spectral foreground, and the spectral background. First, one spectral band of the HSI with more refined information than most other bands is selected and used as the alpha channel of the HSI, from which the hyperspectral foreground and hyperspectral background are estimated. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are applied to the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
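The recombination step of the matting model, I = αF + (1 − α)B applied per spectral band, can be sketched directly. The alpha channel and foreground/background arrays below are invented for illustration; in the method they come from the selected refined band and the matting estimation:

```python
import numpy as np

def reconstruct_band(alpha, F, B):
    """Matting-model recombination of one spectral band:
    I = alpha * F + (1 - alpha) * B, with alpha the alpha channel,
    F the spectral foreground, and B the spectral background."""
    return alpha * F + (1.0 - alpha) * B

alpha = np.array([[1.0, 0.5],
                  [0.0, 0.25]])        # shared alpha channel
F = np.full((2, 2), 0.8)               # spectral foreground of one band
B = np.full((2, 2), 0.2)               # spectral background of one band
band = reconstruct_band(alpha, F, B)
```

Repeating this per band, with a denoised foreground/background estimate, yields the reconstructed HSI on which the classifiers are evaluated.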
NASA Astrophysics Data System (ADS)
Li, Zhiyang
2010-10-01
A method for high-precision optical wavefront reconstruction using low-resolution spatial light modulators (SLMs) was proposed. It utilizes an adiabatic waveguide taper consisting of a plurality of single-mode waveguides to decompose an incident light field into the simple fundamental modes of the single-mode waveguides. By digital generation of the conjugate fields of those simple fundamental modes, a field proportional to the original incident light field can be reconstructed accurately based on reciprocity. Devices based on the method using transparent and reflective SLMs possess none of the aberrations of a conventional optical lens and are able to achieve diffraction-limited resolution. Specifically, on the surface of the narrow end of a taper a resolution much higher than half the wavelength is attainable. The device may work in linear mode and possesses unlimited theoretical 3D space-bandwidth product (SBP). The SBP of a real device is limited by the accuracy of the SLMs. A pair of 8-bit SLMs with 1000 × 1000 = 10^6 pixels could provide an SBP of about 5 × 10^4. The SBP may be expanded 16-fold if 10-bit SLMs with the same number of pixels are employed or if 16 successive frames are used to display one scene. The device might be used as high-precision optical tweezers, or employed for continuous or discrete real-time 3D display, 3D measurement, machine vision, etc.
NASA Astrophysics Data System (ADS)
Chua, S. Y.; Wang, X.; Guo, N.; Tan, C. S.; Chai, T. Y.; Seet, G. L.
2016-04-01
This paper experimentally investigates the TOF imaging profile, which strongly influences reconstruction quality in accurate range sensing. Our analysis shows that the recorded reflected-intensity profile deviates from the commonly assumed Gaussian model and can be regarded as a mixture of noise and the actual reflected signal. A noise-weighted average range calculation is therefore proposed to reduce the influence of noise, based on the signal detection threshold and system noise. In our experiments, this alternative range solution is more accurate than the conventional weighted-average method and acts as a paraxial correction that improves range reconstruction in a 3D gated imaging system.
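One plausible reading of the noise-weighted average is a range centroid in which each sample is weighted by its intensity above the estimated noise floor, so noise samples contribute nothing. The paper's exact weighting may differ, so this is a schematic sketch only, with an invented intensity profile:

```python
import numpy as np

def noise_weighted_range(t, intensity, noise_floor):
    """Noise-weighted average range estimate (schematic).
    Samples below the detection threshold are discarded, and the rest
    are weighted by their intensity above the estimated noise floor.
    t: sample times (the caller converts time to range via c*t/2).
    """
    w = np.clip(intensity - noise_floor, 0.0, None)
    if w.sum() == 0:
        raise ValueError("no samples above the noise floor")
    return float(np.sum(w * t) / w.sum())

t = np.arange(10.0)
sig = np.full(10, 0.2)               # constant noise floor
sig[4:7] = [1.0, 3.0, 1.0]           # actual return centred at t = 5
t_est = noise_weighted_range(t, sig, noise_floor=0.2)
plain = float(np.sum(sig * t) / sig.sum())   # conventional weighted average
```

The plain intensity-weighted average is pulled toward the centre of the gating window by the noise pedestal, while the noise-weighted estimate recovers the return position.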
Image reconstruction algorithms for DOIS: a diffractive optic image spectrometer
NASA Astrophysics Data System (ADS)
Lyons, Denise M.; Whitcomb, Kevin J.
1996-06-01
The diffractive optic imaging spectrometer, DOIS, is a compact, economical, rugged, programmable, multi-spectral imager. The design implements a conventional CCD camera and emerging diffractive optical element (DOE) technology in an elegant configuration, adding spectroscopy capabilities to current imaging systems (Lyons 1995). This paper reports on the visible prototype DOIS that was designed, fabricated and characterized. Algorithms are presented for simulation and post-detection processing with digital image processing techniques. This improves the spectral resolution by removing unwanted blurred components from the spectral images. DOIS is a practical image spectrometer that can be built to operate at ultraviolet, visible or infrared wavelengths for applications in surveillance, remote sensing, law enforcement, environmental monitoring, laser communications, and laser counter intelligence.
Cloud based toolbox for image analysis, processing and reconstruction tasks.
Bednarz, Tomasz; Wang, Dadong; Arzhaeva, Yulia; Lagerstrom, Ryan; Vallotton, Pascal; Burdett, Neil; Khassapov, Alex; Szul, Piotr; Chen, Shiping; Sun, Changming; Domanski, Luke; Thompson, Darren; Gureyev, Timur; Taylor, John A
2015-01-01
This chapter describes a novel way of carrying out image analysis, reconstruction and processing tasks using a cloud-based service provided on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) infrastructure. The toolbox allows users free access to a wide range of useful blocks of functionality (imaging functions) that can be connected together in workflows, allowing the creation of even more complex algorithms that can be re-run on different data sets, shared with others or further adjusted. The functions provided cover the areas of cellular imaging, advanced X-ray image analysis, computed tomography, and 3D medical imaging and visualisation. The service is currently available on the website www.cloudimaging.net.au. PMID:25381109
Research on high-order approximation of radiative transfer equation for image reconstruction
NASA Astrophysics Data System (ADS)
Ma, Wenjuan; Gao, Feng; Wu, Linhui; Yi, Xi; Zhu, Pingping; Zhao, Huijuan
2011-03-01
In this article, we derive the two-dimensional spherical harmonics equations to third order (P3) of the radiative transfer equation for anisotropic scattering. We solved these equations using a Galerkin finite element method and compared the solutions with the first-order diffusion equation and Monte Carlo simulation. Benchmark problems were tested, and we found that the developed third-order model is able to significantly improve on the diffusion solution in circular geometry with a high absorption coefficient, and that the radiance distribution close to the light source is more accurate. This is significant for accurate modeling of light propagation in small tissue geometries in small-animal imaging. An inverse model for the simultaneous reconstruction of absorption images is then proposed based on the P3 equations, and the feasibility and effectiveness of this method are demonstrated by simulation.
NASA Astrophysics Data System (ADS)
Hosseininaveh Ahmadabadian, Ali; Robson, Stuart; Boehm, Jan; Shortis, Mark
2013-04-01
Multi-View Stereo (MVS), as a low-cost technique for precise 3D reconstruction, can be a rival to laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a huge number of stereo images captured of the object (e.g. 200 high-resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, the capture and processing time increases when a vast number of high-resolution images is employed. Moreover, some parts of the object are often missing due to incomplete coverage. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo, or optionally single, images from a large image dataset. The approach focuses on the two key steps of image clustering and iterative image selection. The method is implemented in a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, which is a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.
Research on image matching method of big data image of three-dimensional reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong
2015-12-01
Image matching is the core of a three-dimensional reconstruction pipeline. With the development of computer processing technology, seeking the images to be matched from large image datasets acquired in different formats, at different scales and at different locations has placed new demands on image matching. To establish three-dimensional reconstruction based on image matching over big-data image sets, this paper puts forward a new, effective matching method based on a visual bag-of-words model. The main technologies involved are building the bag-of-words model and image matching. First, we extract SIFT feature points from the images in the database and cluster the feature points to generate the bag of words. We then establish inverted files based on the bag of words; the inverted files record, for each visual word, all images in which that word appears. We perform image matching only among images that share the same words, to improve matching efficiency. Finally, we build the three-dimensional model from the matched images. Experimental results indicate that this method improves matching efficiency and is suitable for the requirements of large-data reconstruction.
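The inverted-file lookup the abstract describes can be sketched as follows. The vocabulary here would come from clustering SIFT descriptors (e.g. k-means centroids); the tiny 2D descriptors and all names are illustrative, not the paper's implementation.

```python
import numpy as np
from collections import defaultdict

def quantize(descriptors, vocabulary):
    """Assign each local descriptor (e.g. SIFT) to its nearest visual word."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def build_inverted_file(image_descriptors, vocabulary):
    """Map each visual word id to the set of image ids that contain it."""
    inverted = defaultdict(set)
    for img_id, desc in image_descriptors.items():
        for w in quantize(desc, vocabulary):
            inverted[w].add(img_id)
    return inverted

def candidate_matches(query_desc, vocabulary, inverted):
    """Rank database images by the number of visual words shared with the query,
    so expensive pairwise matching is only attempted on the top candidates."""
    votes = defaultdict(int)
    for w in set(quantize(query_desc, vocabulary)):
        for img_id in inverted[w]:
            votes[img_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```

The key efficiency point is that a query only touches the inverted-file buckets of its own words, rather than every image in the database.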
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
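MENT itself is not specified in the abstract, but the multiplicative ART (MART) baseline it is compared against has a standard update rule; below is a minimal sketch on a toy system, not the authors' implementation. `A` is the projection matrix, `g` the measured line integrals, and the multiplicative update drives the estimate toward an entropy-favoring nonnegative solution.

```python
import numpy as np

def mart(A, g, n_iter=50, lam=1.0):
    """Multiplicative algebraic reconstruction (MART):
    f_j <- f_j * (g_i / (A f)_i)^(lam * A_ij), cycling over rows i.
    Keeps the intensity field nonnegative by construction."""
    f = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ f
            if proj > 0 and g[i] > 0:
                f *= (g[i] / proj) ** (lam * A[i])
    return f
```

For consistent nonnegative data the iterates converge to the feasible solution of maximum entropy, which is the property MENT exploits more directly.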
3D scene reconstruction from multi-aperture images
NASA Astrophysics Data System (ADS)
Mao, Miao; Qin, Kaihuai
2014-04-01
With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via programmable aperture. Secondly, we use SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective to reconstruct dense 3D scene.
Image reconstruction and optimization using a terahertz scanned imaging system
NASA Astrophysics Data System (ADS)
Yıldırım, İhsan Ozan; Özkan, Vedat A.; Idikut, Fırat; Takan, Taylan; Şahin, Asaf B.; Altan, Hakan
2014-10-01
Due to the limited number of array detection architectures in the millimeter-wave to terahertz region of the electromagnetic spectrum, imaging schemes with scan architectures are typically employed. In these configurations, the interplay between the frequencies used to illuminate the scene and the optics used plays an important role in the quality of the formed image. Using a multiplied Schottky-diode-based terahertz transceiver operating at 340 GHz in a stand-off detection scheme, the image quality of a metal target was assessed based on the scanning speed of the galvanometer mirrors as well as the optical system that was constructed. Background effects such as leakage into the receiver were minimized by conditioning the signal at the output of the transceiver. Then, the image of the target was simulated based on known parameters of the optical system, and the measured images were compared to the simulation. Using an image-quality index based on the χ² statistic, the simulated and measured images were found to be in good agreement, with χ² = 0.14. The measurements shown here will aid the future development of larger stand-off imaging systems that work in the terahertz frequency range.
Rozario, T; Bereg, S; Chiu, T; Liu, H; Kearney, V; Jiang, L; Mao, W
2014-06-01
Purpose: In order to locate lung tumors on projection images without internal markers, a digitally reconstructed radiograph (DRR) is created and compared with the projection images. Since lung tumors always move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated from a modified anatomy in which the lung tumors are removed. In addition, global discrepancies exist between DRRs and projections due to their different image origins, scattering, and noise, which adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: The method divides the global images into a matrix of small tiles, and similarity is evaluated by calculating the normalized cross correlation (NCC) between corresponding tiles of the projections and DRRs. The tile configuration (tile locations) is automatically optimized to keep the tumor within a single tile, which matches poorly with the corresponding DRR tile. A pixel-based linear transformation is determined by linear interpolation of the tile transformation results obtained during tile matching. The DRR is transformed to the projection-image level and subtracted from it, so that the resulting subtracted image contains only the tumor. A DRR of the tumor is then registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (Brainlab) for dynamic tumor tracking in phantom studies. Radio-opaque markers were implanted and used as ground truth for tumor positions. Although other organs and bony structures introduce strong signals superimposed on the tumor at some angles, this method accurately located the tumor on every projection over 12 gantry angles. The maximum error was less than 2.6 mm, while the overall average error was 1.0 mm. Conclusion: The algorithm is capable of detecting tumors without markers despite strong background signals.
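The tile-wise NCC comparison can be sketched as below; the fixed tile grid and the worst-tile heuristic are illustrative assumptions, not the paper's optimized tile configuration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized tiles."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def tile_similarity_map(projection, drr, tile=32):
    """NCC between corresponding tiles of a projection and its DRR.
    A tile with a markedly low score is a candidate tumor location,
    since the (tumor-free) background DRR fails to match there."""
    h, w = projection.shape
    scores = {}
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            scores[(y, x)] = ncc(projection[y:y + tile, x:x + tile],
                                 drr[y:y + tile, x:x + tile])
    return scores
```

In the paper the tile positions themselves are optimized so the tumor stays within one tile; the fixed grid here only shows why a local NCC map isolates the mismatching region.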
NASA Astrophysics Data System (ADS)
Jin, Ge; Lee, Sang-Joon; Hahn, James K.; Bielamowicz, Steven; Mittal, Rajat; Walsh, Raymond
2007-03-01
Medialization laryngoplasty is a surgical procedure to improve the voice function of patients with vocal fold paresis and paralysis. An image-guided system for medialization laryngoplasty would help surgeons accurately place the implant and thus reduce the failure rate of the surgery. One of the fundamental challenges in an image-guided system is to accurately register the preoperative radiological data to the intraoperative anatomical structure of the patient. In this paper, we present a combined surface- and fiducial-based registration method to register the preoperative 3D CT data to the intraoperative surface of the larynx. To accurately model the exposed surface area, a structured-light-based stereo vision technique is used for surface reconstruction. We combined gray-code patterns and multi-line shifting to generate the intraoperative surface of the larynx. To register the point clouds from the intraoperative stage to the preoperative 3D CT data, a shape-prior-based ICP method is proposed to quickly register the two surfaces. The proposed approach is capable of tracking the fiducial markers and reconstructing the surface of the larynx with no damage to the anatomical structure. We used off-the-shelf digital cameras, an LCD projector and a rapid 3D prototyper to develop our experimental system. The final RMS error in the registration is less than 1 mm.
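The paper's shape-prior ICP is not spelled out in the abstract; the sketch below is plain point-to-point ICP (nearest-neighbour matching plus the Kabsch/SVD rigid fit), the baseline that such a method extends.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (the Kabsch/SVD solution used inside each ICP iteration)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(src, dst, n_iter=20):
    """Vanilla point-to-point ICP: nearest-neighbour correspondence
    followed by a rigid fit, repeated until (approximate) convergence."""
    cur = src.copy()
    for _ in range(n_iter):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

A shape prior, as in the paper, would constrain or initialize this iteration so that the sparse intraoperative surface cannot slide into anatomically implausible alignments.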
Neutron source reconstruction from pinhole imaging at National Ignition Facility.
Volegov, P; Danly, C R; Fittinghoff, D N; Grim, G P; Guler, N; Izumi, N; Ma, T; Merrill, F E; Warrick, A L; Wilde, C H; Wilson, D C
2014-02-01
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the ignition stage of inertial confinement fusion (ICF) implosions at NIF. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, single-sided tapers in gold. These apertures, which have triangular cross sections, produce distortions in the image, and the extended nature of the pinhole results in a non-stationary or spatially varying point spread function across the pinhole field of view. In this work, we have used iterative Maximum Likelihood techniques to remove the non-stationary distortions introduced by the aperture to reconstruct the underlying neutron source distributions. We present the detailed algorithms used for these reconstructions, the stopping criteria used and reconstructed sources from data collected at NIF with a discussion of the neutron imaging performance in light of other diagnostics. PMID:24593362
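The spatially varying aperture response makes the NIF reconstruction non-stationary; for illustration only, here is the classical stationary-PSF maximum-likelihood (Richardson-Lucy) iteration that such schemes generalize, in 1D.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200):
    """Maximum-likelihood (Richardson-Lucy) deconvolution for a
    *stationary* PSF. The NIF reconstruction replaces the convolutions
    with a spatially varying aperture response, but the multiplicative
    EM update has the same structure."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]  # correlation = convolution with flipped kernel
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = np.where(conv > 0, blurred / conv, 0.0)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est
```

The update preserves nonnegativity and total flux, which is why ML methods are favored for low-count neutron images; a stopping criterion (as discussed in the paper) is needed because late iterations amplify noise.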
Model-based reconstructive elasticity imaging of deep venous thrombosis.
Aglyamov, Salavat; Skovoroda, Andrei R; Rubin, Jonathan M; O'Donnell, Matthew; Emelianov, Stanislav Y
2004-05-01
Deep venous thrombosis (DVT) and its sequela, pulmonary embolism, is a significant clinical problem. Once detected, DVT treatment is based on the age of the clot. There are no good noninvasive methods, however, to determine clot age. Previously, we demonstrated that imaging internal mechanical strains can identify and possibly age thrombus in a deep vein. In this study the deformation geometry for DVT elasticity imaging and its effect on Young's modulus estimates is addressed. A model-based reconstruction method is presented to estimate elasticity in which the clot-containing vessel is modeled as a layered cylinder. Compared to an unconstrained approach in reconstructive elasticity imaging, the proposed model-based approach has several advantages: only one component of the strain tensor is used; the minimization procedure is very fast; the method is highly efficient because an analytic solution of the forward elastic problem is used; and the method is not very sensitive to the details of the external load pattern--a characteristic that is important for free-hand, external, surface-applied deformation. The approach was tested theoretically using a numerical model, and experimentally on both tissue-like phantoms and an animal model of DVT. Results suggest that elasticity reconstruction may prove to be a practical adjunct to triplex scanning to detect, diagnose, and stage DVT. PMID:15217230
Variational Reconstruction of Left Cardiac Structure from CMR Images
Wan, Min; Huang, Wei; Zhang, Jun-Mei; Zhao, Xiaodan; Tan, Ru San; Wan, Xiaofeng; Zhong, Liang
2015-01-01
Cardiovascular disease (CVD), accounting for 17% of overall deaths in the USA, is the leading cause of death worldwide. Advances in medical imaging techniques make the quantitative assessment of both cardiac anatomy and function possible. Cardiac modeling is an indispensable prerequisite for such quantitative analysis. In this study, a novel method is proposed to reconstruct the left cardiac structure from multi-planar cardiac magnetic resonance (CMR) images and contours. Routine CMR examinations were performed to acquire both long-axis and short-axis images. Trained technologists delineated the endocardial contours. Multiple sets of two-dimensional contours were projected into the three-dimensional patient-based coordinate system and registered to each other. A variational surface reconstruction algorithm based on Delaunay triangulation and graph cuts was applied to the union of the registered point sets, and the resulting triangulated surfaces were further post-processed. Our method was evaluated quantitatively by computing the overlap ratio between the reconstructed model and the manually delineated long-axis contours, which validates the approach. We envisage that this method could be used by radiographers and cardiologists to diagnose and assess cardiac function in patients with diverse heart diseases. PMID:26689551
Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen
2016-01-01
Exterior orientation parameter (EOP) estimation using space resection plays an important role in topographic reconstruction for push-broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for push-broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. First, we estimate the angular rotations from the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection simplify into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps than the existing space resection model, not only for simulated data but also for real data from Chang'E-1. PMID:27077855
Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.
2009-11-15
Purpose: The authors develop a practical, iterative algorithm for image reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p-variation (TpV), a function that reduces to the total variation when p = 1.0 or the image roughness when p = 2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, by finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
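The TpV objective and the positivity projection (one of the convex-set projections mentioned) can be written down directly; a minimal sketch using forward differences, not the authors' full reconstruction loop.

```python
import numpy as np

def total_p_variation(img, p=1.0, eps=0.0):
    """Image TpV: sum over pixels of the gradient magnitude raised to p.
    p = 1 gives the usual total variation; p = 2 the image roughness.
    A small eps smooths the non-differentiability at zero gradient when
    this is used inside a gradient-based solver."""
    gx = np.diff(img, axis=1, append=img[:, -1:])  # forward differences
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    return float((mag ** p).sum())

def project_positive(img):
    """POCS step enforcing the image-positivity constraint."""
    return np.clip(img, 0.0, None)
```

In the paper's scheme, steps decreasing TpV alternate with projections like `project_positive` and a projection onto the set of images whose forward projection lies within the data tolerance.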
Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images
NASA Astrophysics Data System (ADS)
Kruschwitz, Jennifer D. T.
Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.
Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms
NASA Astrophysics Data System (ADS)
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-04-01
Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated as a regularized scheme in which l2 norms are preferred for both the data misfit and the image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a primal dual-interior point method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
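A primal-dual interior-point solver is beyond a short sketch, but the robustness of the l1 data term to outlying measurements (such as those from a faulty electrode) can be illustrated with iteratively reweighted least squares, used here purely as a stand-in for the PDIPM machinery.

```python
import numpy as np

def irls_l1(A, b, n_iter=100, eps=1e-6):
    """Solve min_x ||Ax - b||_1 by iteratively reweighted least squares.
    Each pass solves a weighted l2 problem with weights 1/|residual|,
    so gross outliers are progressively down-weighted."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # l2 start
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
        Aw = A * w[:, None]                        # A^T W A x = A^T W b
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x
```

An l2 fit splits a gross outlier's influence across the whole solution; the l1 fit effectively ignores it, which mirrors why the l1 data norm tolerates erroneous electrode measurements in the clinical EIT scenarios above.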
Reconstruction of 3D scenes from sequences of images
NASA Astrophysics Data System (ADS)
Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa
2013-08-01
Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken by a freely moving camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth-map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by the camera moving freely around the object. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image the points of interest corresponding to those in the previous images are refined or corrected, eliminating the vertical parallax between images. Next, the camera is calibrated and its intrinsic and external parameters are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is acquired using a non-local cost-aggregation method for stereo matching, and a point cloud sequence is then derived from the scene depths and the external camera parameters and merged into a point cloud model. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3
NASA Astrophysics Data System (ADS)
Khorsandi, M.; Feghhi, S. A. H.
2015-08-01
In industrial gamma-ray CT, specifically for large-dimension plants or processes, the simplicity and portability of the CT system necessitate the use of individual gamma-ray detectors for imaging purposes. Given the properties of the gamma-ray source as well as the characteristics of the detectors, including penetration depth, energy resolution, size, etc., the quality of the reconstructed images is limited. Therefore, implementation of an appropriate reconstruction procedure is important to improve the image quality. In this paper, an accurate and applicable procedure is proposed for image reconstruction in gamma-ray CT of large-dimension industrial plants. Additionally, a portable configuration of the tomographic system is introduced and simulated in the MCNPX Monte Carlo code. The simulation results were validated through comparison with experimental results reported in the literature. Evaluations showed that the maximum difference between the reconstruction error in this work and the benchmark was less than 1.3%. An additional investigation was carried out on a typical standard phantom introduced by the IAEA using the validated procedure. Image-quality assessment showed that the reconstruction error was less than 1.7% using different algorithms, and a good contrast higher than 76% was obtained. Our overall results indicate that the procedures and methods introduced in this work are quite efficient for improving the image quality of gamma CT of industrial plants.
Task-based optimization of image reconstruction in breast CT
NASA Astrophysics Data System (ADS)
Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2014-03-01
We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
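For a known difference signal Δs and noise covariance K, the Hotelling observer detectability used above reduces to d′² = Δsᵀ K⁻¹ Δs. A minimal sketch on a hypothetical two-pixel example:

```python
import numpy as np

def hotelling_snr(signal, background, covariance):
    """Hotelling observer detectability: d'^2 = ds^T K^{-1} ds, where ds
    is the mean difference signal and K the noise covariance."""
    ds = signal - background
    return float(np.sqrt(ds @ np.linalg.solve(covariance, ds)))

# Toy 2-pixel case: difference signal [3, 4], white noise of variance 4.
ds = np.array([3.0, 4.0])
K = 4.0 * np.eye(2)
print(hotelling_snr(ds, np.zeros(2), K))   # sqrt(25 / 4) = 2.5
```

In practice K is estimated from ensembles of reconstructed images, which is what couples d′ to the number of views and to the reconstruction filter.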
Super-resolution Reconstruction for Tongue MR Images
Woo, Jonghye; Bai, Ying; Roy, Snehashis; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Magnetic resonance (MR) images of the tongue have been used in both clinical medicine and scientific research to reveal tongue structure and motion. In order to see different features of the tongue and its relation to the vocal tract it is beneficial to acquire three orthogonal image stacks—e.g., axial, sagittal and coronal volumes. In order to maintain both low noise and high visual detail, each set of images is typically acquired with in-plane resolution that is much better than the through-plane resolution. As a result, any one data set, by itself, is not ideal for automatic volumetric analyses such as segmentation and registration or even for visualization when oblique slices are required. This paper presents a method of super-resolution reconstruction of the tongue that generates an isotropic image volume using the three orthogonal image stacks. The method uses preprocessing steps that include intensity matching and registration and a data combination approach carried out by Markov random field optimization. The performance of the proposed method was demonstrated on five clinical datasets, yielding superior results when compared with conventional reconstruction methods. PMID:27239084
The feasibility of images reconstructed with the method of sieves
Veklerov, E.; Llacer, J.
1989-04-01
The concept of sieves has been applied with the Maximum Likelihood Estimator (MLE) to image reconstruction. While it makes it possible to recover smooth images consistent with the data, the degree of smoothness it provides is arbitrary. It is shown that the concept of feasibility is able to resolve this arbitrariness. By varying the values of the parameters determining the degree of smoothness, one can generate images on both sides of the feasibility region, as well as within the region. Feasible images recovered using different sieve parameters are compared with feasible results of other procedures. One- and two-dimensional examples using both simulated and real data sets are considered. 12 refs., 3 figs., 2 tabs.
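The feasibility idea can be caricatured as a test on normalized residuals: a reconstruction is feasible when its residuals behave like unit-variance noise, and both over-smoothed fits (chi-square well above 1) and over-fitted ones (chi-square well below 1) fall outside the band. This is a deliberately simplified stand-in for the authors' feasibility criterion, not their test:

```python
import numpy as np

def feasible(y, y_fit, tol=3.0):
    """Crude feasibility check: normalized Poisson residuals of a fit
    should look like unit-variance noise, so the mean chi-square per bin
    should sit near 1 within sampling error."""
    r = (y - y_fit) / np.sqrt(np.maximum(y_fit, 1.0))
    chi2 = np.mean(r ** 2)
    return abs(chi2 - 1.0) < tol / np.sqrt(len(y))

y_fit = np.full(100, 100.0)
y = y_fit + 10.0 * (-1.0) ** np.arange(100)  # residuals of exactly +/- 1 sigma
print(feasible(y, y_fit))      # inside the feasibility band
print(feasible(y_fit, y_fit))  # zero residuals: over-fitted, infeasible
```

Sweeping the sieve (smoothness) parameter moves a reconstruction across this band, which is how the arbitrariness of the smoothness choice is resolved.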
Iterative Self-Dual Reconstruction on Radar Image Recovery
Martins, Charles; Medeiros, Fatima; Ushizima, Daniela; Bezerra, Francisco; Marques, Regis; Mascarenhas, Nelson
2010-05-21
Imaging systems such as ultrasound, sonar, laser and synthetic aperture radar (SAR) are subject to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter for multiplicative noise. We describe a multiscale scheme that preserves sharp edges while smoothing homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.
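The alternating sequential filter that the authors combine with local statistics can be sketched with plain greyscale morphology: an opening (erode, then dilate) removes bright speckles and a closing (dilate, then erode) removes dark ones. The numpy-only sketch below uses a flat structuring element and is purely illustrative; the authors' filter is signal-dependent and multiscale.

```python
import numpy as np

def erode(img, size=3):
    """Greyscale erosion with a flat size x size structuring element."""
    p = size // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out

def dilate(img, size=3):
    """Greyscale dilation with a flat size x size structuring element."""
    p = size // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].max()
    return out

def asf(img, size=3):
    """One stage of an alternating sequential filter: opening (removes
    bright speckles) followed by closing (removes dark speckles)."""
    opened = dilate(erode(img, size), size)
    return erode(dilate(opened, size), size)

# Flat patch with one bright and one dark speckle: both are suppressed.
img = np.full((16, 16), 10.0)
img[4, 4] = 40.0    # bright speckle
img[10, 9] = 0.0    # dark speckle
out = asf(img)
print(out[4, 4], out[10, 9])   # 10.0 10.0
```

On this toy input the filter restores the flat background exactly, which is the edge-preserving smoothing behaviour the abstract describes for homogeneous areas.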
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
Li, Ang; Zhang, Quan; Culver, Joseph P; Miller, Eric L; Boas, David A
2004-02-01
We present an algorithm to reconstruct chromophore concentration images directly, rather than following the traditional two-step process of reconstructing wavelength-dependent absorption coefficient images and then calculating chromophore concentration images. This procedure imposes prior spectral information on the image reconstruction, resulting in a dramatic improvement in the image contrast-to-noise ratio of better than 100%. We demonstrate this improvement with simulations and a dynamic blood phantom experiment. PMID:14759043
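The direct approach can be sketched as a single stacked linear system in which the extinction spectra couple the wavelengths, so the unknowns are the chromophore images themselves rather than per-wavelength absorption maps. All dimensions and extinction values below are hypothetical; this illustrates the idea, not the authors' reconstruction.

```python
import numpy as np

# Toy "direct" spectral reconstruction.
rng = np.random.default_rng(2)
n_meas, n_vox, n_chr = 6, 4, 2

J = rng.normal(size=(n_meas, n_vox))   # sensitivity (Jacobian), shared across wavelengths
E = np.array([[0.8, 0.3],              # extinction coefficients: wavelength x chromophore
              [0.2, 0.9]])
n_wav = E.shape[0]
c_true = rng.uniform(1.0, 2.0, size=(n_chr, n_vox))

# Forward model per wavelength w: y_w = J @ (sum_k E[w, k] * c_k)
y = np.concatenate([J @ (E[w] @ c_true) for w in range(n_wav)])

# Direct reconstruction: one stacked system over all wavelengths,
# with unknown vector [c_0; c_1] (all voxels of each chromophore).
A = np.vstack([np.hstack([E[w, k] * J for k in range(n_chr)])
               for w in range(n_wav)])
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(c_hat, c_true.reshape(-1)))
```

Because every wavelength constrains every chromophore image simultaneously, noise is averaged across wavelengths, which is the mechanism behind the contrast-to-noise improvement reported in the abstract.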
Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images
NASA Astrophysics Data System (ADS)
Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.
2014-12-01
Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle avoidance stages are essential for the safe soft landing of CE-3, so the precise spacecraft trajectory in these stages is of great significance for verifying the orbital control strategy, optimizing the orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last just 25 s, it is difficult to capture the spacecraft's subtle movements with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3, owing to technical advantages such as independence from spacecraft kinetic models of the lunar gravity field, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, spanning about 9 s, using Single Image Space Resection (SISR). The results show that the subtle movements of CE-3 during the hovering stage are revealed by the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.
Isotope specific resolution recovery image reconstruction in high resolution PET imaging
Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib
2014-05-15
Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution
Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology
NASA Astrophysics Data System (ADS)
Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan
2016-05-01
This paper concerns the problem of platform-vibration-induced band-to-band misregistration in an acousto-optic imaging spectrometer for spaceborne application. Registering images of different bands formed at different times or positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented using the polychromatic undiffracted beam of the AOTF. The factors affecting motion detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system is proposed for the first time, with reduced distortion and chromatic aberration, indicating the potential for higher registration accuracy. Consequently, a spectral restoration experiment using an additional motion detection channel is presented for the first time, demonstrating the accurate spectral image registration capability of this technique.
Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner
NASA Astrophysics Data System (ADS)
Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.
2004-10-01
We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
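The FORE+OSEM pipeline above rebins the fully 3D data into independent 2D sinograms and then applies ordered-subsets EM to each. A minimal OSEM sketch on a hypothetical 4-ray, 3-pixel linear system (not the MiCES system model, which additionally factors in detector blurring) shows the subset-cycling multiplicative update:

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=500):
    """Ordered-subsets EM: the MLEM multiplicative update applied to one
    subset of projection rows at a time, cycling through the subsets."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            x *= (As.T @ (ys / (As @ x))) / As.sum(axis=0)
    return x

# Hypothetical system with noiseless, consistent data.
A = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.4, 1.0],
              [0.5, 0.5, 0.5]])
x_true = np.array([1.0, 2.0, 3.0])
x_hat = osem(A, A @ x_true)
print(np.round(x_hat, 2))
```

Each subset update costs a fraction of a full MLEM iteration, which is where the acceleration over plain EM comes from; modelling detector blurring amounts to replacing A with a factored system matrix.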
Fast two-dimensional super-resolution image reconstruction algorithm for ultra-high emitter density.
Huang, Jiaqing; Gumpper, Kristyn; Chi, Yuejie; Sun, Mingzhai; Ma, Jianjie
2015-07-01
Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that can be handled by the image reconstruction algorithms. Multiple algorithms have been developed to accurately locate the emitters even when they have significant overlaps. Currently, the compressive-sensing-based algorithm (CSSTORM) achieves the highest emitter density. However, CSSTORM is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with ℓ1-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and make it possible for online super-resolution reconstruction of high-density emitters. PMID:26125349
Task-driven image acquisition and reconstruction in cone-beam CT
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Webster Stayman, J.; Ehtiati, Tina; Siewerdsen, Jeffrey H.
2015-04-01
This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d′) is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging over ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d′ for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d′ by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the
Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems
NASA Astrophysics Data System (ADS)
Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.
2015-08-01
Accurate 3D surface reconstruction of our environment has become essential for a growing number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have been evolving as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impact the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed using dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of individually scanned surfaces within the captured images. It is initiated by a Semi-Global dense Matching procedure that generates a 3D point cloud from the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photo-realistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photo-realistic 3D surface reconstruction.
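The planar-surface extraction step can be illustrated with a minimal RANSAC plane fit over a synthetic point cloud. This is a generic sketch of plane segmentation, not the authors' segmentation algorithm:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.05, seed=0):
    """Toy RANSAC plane fit: repeatedly fit a plane through 3 random
    points and keep the candidate with the most inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        s = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(s[1] - s[0], s[2] - s[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        dist = np.abs((pts - s[0]) @ (normal / norm))
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic cloud: 200 noisy points near z = 0 plus 40 elevated outliers.
rng = np.random.default_rng(5)
plane = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                         0.01 * rng.normal(size=200)])
outliers = rng.uniform(-1, 1, (40, 3)) + np.array([0.0, 0.0, 2.0])
pts = np.vstack([plane, outliers])
mask = ransac_plane(pts)
print(mask[:200].sum(), mask[200:].sum())
```

Robust consensus fitting of this kind tolerates the point-density variations and matching outliers that plague dense-matching point clouds, which is why region-based segmentation is attractive for low-cost UAS imagery.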
An accurate method of extracting fat droplets in liver images for quantitative evaluation
NASA Astrophysics Data System (ADS)
Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2015-03-01
The degree of steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to introduce errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
Image Reconstruction and Discrimination at Low Light Levels
NASA Astrophysics Data System (ADS)
Zerom, Petros
Quantum imaging is a recent and promising branch of quantum optics that exploits the quantum nature of light. Overcoming the limitations imposed by classical light sources in optical imaging, or surpassing the classical bounds of image formation, is one of the key motivations in quantum imaging. In this thesis, I describe certain aspects of both quantum and thermal ghost imaging, and I also study image discrimination with high fidelity at low light levels. First, I present a theoretical and experimental study of entangled-photon compressive ghost imaging. In quantum ghost imaging using entangled photon pairs, the brightness of readily available sources is rather weak. The usual technique of image acquisition in this imaging modality is to raster scan a single-pixel, single-photon-sensitive detector in one arm of a ghost imaging setup. In most imaging modalities, the number of measurements required to fully resolve an object is set by the measurement's Nyquist limit. In the first part of the thesis, I propose a ghost imaging (GI) configuration that uses bucket detectors (as opposed to a raster-scanning detector) in both arms of the GI setup. High-resolution image reconstructions obtained with compressed sensing algorithms, using only 27% of the Nyquist-limit number of measurements, are presented. The second part of my thesis deals with thermal ghost imaging. Unlike in quantum GI, bright and spatially correlated classical sources of radiation are used in thermal GI. Usually high-contrast speckle patterns are used as the sources of the correlated beams of radiation. I study the effect of the field statistics of the illuminating source on the quality of ghost images. I show theoretically and experimentally that a thermal GI setup can produce high quality images even when low-contrast (intensity-averaged) speckle patterns are used as an illuminating source, as long as the collected signal is mainly caused by the random fluctuation of the incident speckle field, as
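A thermal ghost image is formed by correlating the bucket signal with the reference speckle frames: the covariance of the two picks out the object's transmission. A 1-D numpy sketch with synthetic exponential speckle statistics and a hypothetical object (an illustration of the principle, not the thesis's apparatus):

```python
import numpy as np

# 1-D thermal ghost imaging sketch.
rng = np.random.default_rng(3)
n_pix, n_frames = 64, 5000
T = np.zeros(n_pix)
T[20:30] = 1.0                     # transmissive "object"

I = rng.exponential(scale=1.0, size=(n_frames, n_pix))  # speckle frames
B = I @ T                          # bucket detector: total transmitted light

# Ghost image = Cov(bucket signal, reference intensity), pixel by pixel.
G = (B[:, None] * I).mean(axis=0) - B.mean() * I.mean(axis=0)
print(int(G.argmax()))             # brightest recovered pixel lies in the object
```

Because the reconstruction depends only on the fluctuating part of the speckle field, it also works for low-contrast illumination, which is the regime the thesis examines.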
Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.
2015-01-01
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output. PMID:26430292
Reconstruction of an AFM image based on estimation of the tip shape
NASA Astrophysics Data System (ADS)
Yuan, Shuai; Luan, Fangjun; Song, Xiaoyu; Liu, Lianqing; Liu, Jifei
2013-10-01
From the viewpoint of mathematical morphology, an atomic force microscopy (AFM) image contains the distortion effect of the tip convolution on a real sample surface. If tip shape can be characterized accurately, mathematical deconvolution can be applied to reduce the distortion to obtain more precise AFM images. AFM image reconstruction has practical significance in nanoscale observation and manipulation technology. Among recent tip modeling algorithms, the blind tip evaluation algorithm based on mathematical morphology is widely used. However, it takes considerable computing time, and the noise threshold is hard to optimize. To tackle these problems, a new blind modeling method is proposed in this paper to accelerate the computation of the algorithm and realize the optimum threshold estimation to build a precise tip model. The simulation verifies the efficiency of the new algorithm by comparing the computing time with the original one. The calculated tip shape is also validated by comparison with the SEM image of the tip. Finally, the reconstruction of a carbon nanotube image based on the precise tip model illustrates the feasibility and validity of the proposed algorithm.
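The morphological model behind tip deconvolution can be stated compactly: the AFM image is the dilation of the true surface by the (reflected) tip, and eroding the image by the same tip yields an upper-bound estimate of the surface that is exact wherever the tip touched it. A 1-D numpy sketch with a hypothetical parabolic tip (illustrating the model, not the paper's blind estimation algorithm):

```python
import numpy as np

def dilate(s, p):
    """AFM imaging model: image[x] = max_k (s[x-k] + p[k]), the dilation
    of surface s by tip profile p (apex at centre, heights <= 0)."""
    n, h = len(s), len(p) // 2
    out = np.empty(n)
    for x in range(n):
        out[x] = max(s[x - k] + p[k + h]
                     for k in range(-h, h + 1) if 0 <= x - k < n)
    return out

def erode(i, p):
    """Surface reconstruction: erosion of the image by the same tip."""
    n, h = len(i), len(p) // 2
    out = np.empty(n)
    for x in range(n):
        out[x] = min(i[x + k] - p[k + h]
                     for k in range(-h, h + 1) if 0 <= x + k < n)
    return out

s = np.zeros(21)
s[10] = 5.0                                  # narrow spike
p = -(np.arange(-2, 3) ** 2).astype(float)   # parabolic tip profile
image = dilate(s, p)       # what the AFM records (broadened spike)
recon = erode(image, p)    # deconvolved estimate; recon >= s everywhere
print(recon[10], image[10])
```

The reconstruction recovers the spike apex exactly but only upper-bounds the flanks the tip could not reach, which is why an accurate tip model (the subject of the blind estimation algorithm) is critical.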
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-10-01
Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirements. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processing unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometric projection matrix is precomputed based on a simple line integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far fewer nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.
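The benefit of the factorization can be seen in a toy example: three sparse factors are cheaper to store and apply than their (denser) product, and applying the transposed factors in reverse order gives the matched backprojector needed by iterative reconstruction. All matrices below are hypothetical stand-ins for the sinogram blur, geometric projector, and image blur:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
# Hypothetical factors: tridiagonal image blur B and sinogram blur S,
# and a ~10%-dense geometric projector G.
blur = (np.diag(np.full(n, 0.6))
        + np.diag(np.full(n - 1, 0.2), 1)
        + np.diag(np.full(n - 1, 0.2), -1))
S = B = blur
G = np.where(rng.random((n, n)) < 0.1, rng.random((n, n)), 0.0)

def forward(x):
    return S @ (G @ (B @ x))          # apply the factors right-to-left

def backward(y):
    return B.T @ (G.T @ (S.T @ y))    # matched backprojector

A = S @ G @ B                         # the equivalent full system matrix
x, y = rng.normal(size=n), rng.normal(size=n)
print(np.count_nonzero(S) + np.count_nonzero(G) + np.count_nonzero(B),
      np.count_nonzero(A))            # factored form stores far fewer nonzeros
print(bool(np.allclose(forward(x), A @ x)),
      bool(np.isclose(forward(x) @ y, x @ backward(y))))
```

In a real implementation each factor would be a scipy-style sparse matrix (or a GPU kernel); the point is that the composition reproduces A exactly while each stage stays sparse enough to fit in limited GPU memory.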
Application of DIRI dynamic infrared imaging in reconstructive surgery
NASA Astrophysics Data System (ADS)
Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier
2006-04-01
We have developed the BioScanIR System based on the QWIP (Quantum Well Infrared Photodetector). Data collected by this sensor are processed using DIRI (Dynamic Infrared Imaging) algorithms. The combination of DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of tissues and organs associated with blood perfusion due to certain diseases such as cancer, vascular disease and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan, with definitive results available in minutes. The algorithms used include the Fast Fourier Transform (FFT), motion artifact correction, spectral analysis and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing and peripheral vascular disease management.
Electromagnetic testing and image reconstruction with flexible scanning tablets
NASA Astrophysics Data System (ADS)
Nishimura, Yoshihiro; Kanev, Kamen; Akira, Sasamoto; Suzuki, Takayuki; Inokawa, Hiroshi
2009-03-01
Eddy current testing (ECT) and electromagnetic acoustic testing (EMAT) employ electromagnetic methods to induce an eddy current and to detect flaws on or within a sample without directly contacting it. ECT produces Lissajous curves, and EMAT yields a time series of signal data, both of which can be displayed directly on nondestructive testing (NDT) equipment screens. Since the interpretation of such output is difficult for untrained persons, images need to be properly reconstructed and visualized. This could be carried out by single-probe 2/3D scanners with imaging capabilities or with array probes, but such equipment is often too large or heavy for ordinary on-site use. In this study, we introduce a flexible scanning tablet for on-site NDT and imaging of detected flaws. The flexible scanning tablet consists of a thin film or a paper with a digitally encoded coordinate system, applicable to flat and curved surfaces, that enables probe positions to be tracked by a specialized optical reader. We also discuss how ECT and EMAT probe coordinates and measurement data could be simultaneously derived and used for further image reconstruction and visualization.
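Once the optical reader pairs each measurement with a probe position, image formation reduces to resampling scattered (x, y, signal) samples onto a regular display grid. A minimal sketch with `scipy.interpolate.griddata`, using a synthetic flaw (the scan area, sample count and noise level are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical free-hand scan: (x, y) probe positions on the encoded sheet
# together with an ECT signal magnitude at each position.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, size=(400, 2))             # probe positions, mm
flaw = np.exp(-((xy[:, 0] - 25) ** 2 + (xy[:, 1] - 25) ** 2) / 20.0)
signal = flaw + 0.01 * rng.standard_normal(400)    # measured magnitude

# Resample the scattered measurements onto a regular 100x100 grid for display.
gx, gy = np.mgrid[0:50:100j, 0:50:100j]
image = griddata(xy, signal, (gx, gy), method="linear")

# The brightest grid cell should sit near the flaw centre at (25 mm, 25 mm).
print(np.nanargmax(image) // 100, np.nanargmax(image) % 100)
```

`method="linear"` leaves NaNs outside the convex hull of visited positions, which usefully marks unscanned regions on the display.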
Improved proton computed tomography by dual modality image reconstruction
Hansen, David C.; Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild
2014-03-15
Purpose: Proton computed tomography (CT) is a promising imaging modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan, with a constrained nonlinear conjugate gradient algorithm that minimizes total variation and the distance to the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the spatial frequency at MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the MTF = 0.5 frequency dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of a prior image greatly improves the resolution and stopping power estimate of proton CT, but does not fully achieve the quality of a 360° scan.
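The core of the DMR objective is a data-fidelity term plus total variation plus a prior-image penalty. A heavily simplified 1-D sketch of that trade-off, using plain gradient descent in place of the authors' constrained nonlinear conjugate gradient, a random matrix as a stand-in for the proton projection operator, and illustrative weights `lam` and `mu` (all values are assumptions for the sketch, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
truth = np.zeros(n); truth[20:40] = 1.0        # 1-D relative stopping power profile
prior = truth + 0.02 * rng.standard_normal(n)  # stand-in for the cone-beam CT prior

# Stand-in for a limited set of proton projections (random rows here).
A = rng.standard_normal((40, n)) / np.sqrt(n)
b = A @ truth

def grad_tv(x, eps=1e-6):
    """Gradient of the smoothed total variation sum_i sqrt((x_{i+1}-x_i)^2 + eps)."""
    d = np.diff(x)
    g = d / np.sqrt(d * d + eps)
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

# Minimize 0.5||Ax-b||^2 + lam*TV(x) + 0.5*mu*||x - prior||^2 by gradient descent.
x = prior.copy()
lam, mu, step = 1e-3, 0.1, 0.4
for _ in range(500):
    x -= step * (A.T @ (A @ x - b) + lam * grad_tv(x) + mu * (x - prior))

print(np.max(np.abs(x - truth)))
```

Even with fewer projection rows than unknowns, the prior and TV terms regularize the solution toward the piecewise-constant truth.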
Spectral-overlap approach to multiframe superresolution image reconstruction.
Cohen, Edward; Picard, Richard H; Crabtree, Peter N
2016-05-20
Various techniques and algorithms have been developed to improve the resolution of sensor-aliased imagery captured with multiple subpixel-displaced frames on an undersampled pixelated image plane. These dealiasing algorithms are typically known as multiframe superresolution (SR), or geometric SR to emphasize the role of the focal-plane array. Multiple low-resolution (LR) aliased frames of the same scene are captured and allocated to a common high-resolution (HR) reconstruction grid, leading to the possibility of an alias-free reconstruction, as long as the HR sampling rate is above the Nyquist rate. Allocating LR-frame irradiances to HR frames requires the use of appropriate weights. Here we present a novel approach to exactly calculating weights based on spatial overlap areas in the spectral domain, which we call the spectral-overlap (SO) method. We emphasize that the SO method is not a spectral approach but rather an approach to calculating spatial weights that uses spectral decompositions to exploit the array properties of the HR and LR pixels. The method is capable of dealing with arbitrary aliasing factors and interframe motions consisting of in-plane translations and rotations. We calculate example reconstructed HR images (the inverse problem) from synthetic aliased images for integer and for fractional aliasing factors. We show the utility of the SO-generated overlap-area weights in both noniterative and iterative reconstructions with known or unknown aliasing factor. We show how the overlap weights can be used to generate the Green's function (pixel response function) for noniterative dealiasing. In addition, we show how the overlap-area weights can be used to generate synthetic aliased images (the forward problem). We compare the SO approach to the spatial-domain geometric approach of O'Rourke and find virtually identical high accuracy but with significant enhancements in speed for SO. We also compare the SO weights to interpolated weights and find that
Quantitative accuracy of MAP reconstruction for dynamic PET imaging in small animals
Cheng, Ju-Chieh (Kevin); Shoghi, Kooresh; Laforest, Richard
2012-01-01
observed to be better than that by 2D-FBP or 2D-OSEM. Using an image quality phantom containing hot spheres, the estimated activity concentration in the largest sphere has the expected concentration relative to the background area for all the MAP images. The obtained recovery coefficients have also been shown to be almost independent of the count density. 2D-FBP and 2D-OSEM do not perform as well, yielding recovery coefficients lower than those observed with 3D-MAP (approximately 33% lower for the smallest sphere). However, a small positive bias was observed in MAP reconstructed images for frames of very low count density. This bias is present in the uniform area for count densities of less than 0.05 × 10^6 counts/ml. For the dynamic mouse study, it was observed that 3D-MAP (even gated at diastole) cannot accurately predict the blood activity concentration due to residual spill-over activity from the myocardium into the left ventricle (approximately 15%). However, 3D-MAP predicts blood activity concentration closer to blood sampling than 2D-FBP. Conclusions: The authors observed that 3D-MAP produces more accurate activity concentration estimates than 2D-FBP or 2D-OSEM at all practical levels of statistics and contrasts due to improved spatial resolution leading to a smaller partial volume effect. PMID:22320813
Bakker, Chris J G; de Leeuw, Hendrik; van de Maat, Gerrit H; van Gorp, Jetse S; Bouwman, Job G; Seevinck, Peter R
2013-01-01
Lack of spatial accuracy is a recognized problem in magnetic resonance imaging (MRI) which severely detracts from its value as a stand-alone modality for applications that put high demands on geometric fidelity, such as radiotherapy treatment planning and stereotactic neurosurgery. In this paper, we illustrate the potential and discuss the limitations of spectroscopic imaging as a tool for generating purely phase-encoded MR images and parameter maps that preserve the geometry of an object and allow localization of object features in world coordinates. Experiments were done on a clinical system with standard facilities for imaging and spectroscopy. Images were acquired with a regular spin echo sequence and a corresponding spectroscopic imaging sequence. In the latter, successive samples of the acquired echo were used for the reconstruction of a series of evenly spaced images in the time and frequency domain. Experiments were done with a spatial linearity phantom and a series of test objects representing a wide range of susceptibility- and chemical-shift-induced off-resonance conditions. In contrast to regular spin echo imaging, spectroscopic imaging was shown to be immune to off-resonance effects, such as those caused by field inhomogeneity, susceptibility, chemical shift, f0 offset and field drift, and to yield geometrically accurate images and parameter maps that allowed object structures to be localized in world coordinates. From these illustrative examples and a discussion of the limitations of purely phase-encoded imaging techniques, it is concluded that spectroscopic imaging offers a fundamental solution to the geometric deficiencies of MRI which may evolve toward a practical solution when full advantage will be taken of current developments with regard to scan time reduction. This perspective is backed up by a demonstration of the significant scan time reduction that may be achieved by the use of compressed sensing for a simple phantom. PMID:22898694
Bansal, Ravi; Hao, Xuejun; Liu, Jun; Peterson, Bradley S.
2014-01-01
Many investigators have tried to apply machine learning techniques to magnetic resonance images (MRIs) of the brain in order to diagnose neuropsychiatric disorders. Usually the number of brain imaging measures (such as measures of cortical thickness and measures of local surface morphology) derived from the MRIs (i.e., their dimensionality) has been large (e.g. >10) relative to the number of participants who provide the MRI data (<100). Sparse data in a high dimensional space increases the variability of the classification rules that machine learning algorithms generate, thereby limiting the validity, reproducibility, and generalizability of those classifiers. The accuracy and stability of the classifiers can improve significantly if the multivariate distributions of the imaging measures can be estimated accurately. To accurately estimate the multivariate distributions using sparse data, we propose to estimate first the univariate distributions of imaging data and then combine them using a Copula to generate more accurate estimates of their multivariate distributions. We then sample the estimated Copula distributions to generate dense sets of imaging measures and use those measures to train classifiers. We hypothesize that the dense sets of brain imaging measures will generate classifiers that are stable to variations in brain imaging measures, thereby improving the reproducibility, validity, and generalizability of diagnostic classification algorithms in imaging datasets from clinical populations. In our experiments, we used both computer-generated and real-world brain imaging datasets to assess the accuracy of multivariate Copula distributions in estimating the corresponding multivariate distributions of real-world imaging data. Our experiments showed that diagnostic classifiers generated using imaging measures sampled from the Copula were significantly more accurate and more reproducible than were the classifiers generated using either the real-world imaging
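The copula idea above can be sketched concretely with a Gaussian copula: map each measure to normal scores through its empirical CDF, estimate the correlation there, sample densely, and map back through the empirical marginal quantiles. A hedged stand-in with NumPy/SciPy (the three "imaging measures" and all sizes are synthetic assumptions, not the paper's data):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Sparse "real" data: 60 participants, 3 correlated measures with
# non-Gaussian marginals (hypothetical stand-ins for imaging measures).
z = rng.multivariate_normal([0, 0, 0],
                            [[1, .7, .3], [.7, 1, .5], [.3, .5, 1]], size=60)
data = np.column_stack([np.exp(z[:, 0]), z[:, 1] ** 3, np.abs(z[:, 2])])

# 1) Map each measure to normal scores via its empirical CDF (ranks).
n, d = data.shape
ranks = data.argsort(axis=0).argsort(axis=0) + 1
g = norm.ppf(ranks / (n + 1))

# 2) Estimate the copula correlation in normal-score space.
R = np.corrcoef(g, rowvar=False)

# 3) Sample a dense synthetic set, then map back through the
#    empirical marginal quantiles of the original data.
g_new = rng.multivariate_normal(np.zeros(d), R, size=5000)
u_new = norm.cdf(g_new)
dense = np.column_stack([np.quantile(data[:, j], u_new[:, j]) for j in range(d)])

print(dense.shape)
```

The dense samples preserve the marginals and (rank) dependence of the sparse data and can be fed to a classifier as in the study.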
Bourantas, Christos V; Kourtis, Iraklis C; Plissiti, Marina E; Fotiadis, Dimitrios I; Katsouras, Christos S; Papafaklis, Michail I; Michalis, Lampros K
2005-12-01
The aim of this study is to describe a new method for the three-dimensional reconstruction of coronary arteries and its quantitative validation. Our approach is based on the fusion of the data provided by intravascular ultrasound images (IVUS) and biplane angiographies. A specific segmentation algorithm is used for the detection of the regions of interest in intravascular ultrasound images. A new methodology is also introduced for the accurate extraction of the catheter path. In detail, a cubic B-spline is used for approximating the catheter path in each biplane projection. Each B-spline curve is swept along the normal direction of its X-ray angiographic plane forming a surface. The intersection of the two surfaces is a 3D curve, which represents the reconstructed path. The detected regions of interest in the IVUS images are placed perpendicularly onto the path and their relative axial twist is computed using the sequential triangulation algorithm. Then, an efficient algorithm is applied to estimate the absolute orientation of the first IVUS frame. In order to obtain 3D visualization the commercial package Geomagic Studio 4.0 is used. The performance of the proposed method is assessed using a validation methodology which addresses the separate validation of each step followed for obtaining the coronary reconstruction. The performance of the segmentation algorithm was examined in 80 IVUS images. The reliability of the path extraction method was studied in vitro using a metal wire model and in vivo in a dataset of 11 patients. The performance of the sequential triangulation algorithm was tested in two gutter models and in the coronary arteries (marked with metal clips) of six cadaveric sheep hearts. Finally, the accuracy in the estimation of the first IVUS frame absolute orientation was examined in the same set of cadaveric sheep hearts. The obtained results demonstrate that the proposed reconstruction method is reliable and capable of depicting the morphology of
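The catheter-path step above rests on fitting a cubic B-spline to the path points seen in each biplane projection and sweeping it along the projection normal. The spline fit itself is a one-liner with SciPy; here is a minimal sketch on synthetic 2-D path points (the point coordinates are illustrative, not patient data):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical catheter path points picked in one biplane projection.
t = np.linspace(0, 1, 12)
x = 10 * t + 2 * np.sin(3 * t)
y = 30 * t ** 2

# Fit an interpolating cubic B-spline through the 2-D projection points...
tck, _ = splprep([x, y], s=0, k=3)

# ...and resample it densely. Sweeping this curve along the projection
# normal yields the surface whose intersection with the surface from the
# second projection gives the reconstructed 3-D path.
u = np.linspace(0, 1, 200)
xs, ys = splev(u, tck)

print(len(xs), abs(xs[0] - x[0]) < 1e-6, abs(ys[-1] - y[-1]) < 1e-6)
```

With `s=0` the spline interpolates the picked points exactly; a small positive `s` would instead smooth digitization noise.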
System calibration and image reconstruction for a new small-animal SPECT system
NASA Astrophysics Data System (ADS)
Chen, Yi-Chun
A novel small-animal SPECT imager, FastSPECT II, was recently developed at the Center for Gamma-Ray Imaging. FastSPECT II consists of two rings of eight modular scintillation cameras and list-mode data-acquisition electronics that enable stationary and dynamic imaging studies. The instrument is equipped with exchangeable aperture assemblies and adjustable camera positions for selections of magnifications, pinhole sizes, and fields of view (FOVs). The purpose of SPECT imaging is to recover the radiotracer distribution in the object from the measured image data. Accurate knowledge of the imaging system matrix (referred to as H) is essential for image reconstruction. To assure that all of the system physics is contained in the matrix, experimental calibration methods for the individual cameras and the whole imaging system were developed and carefully performed. The average spatial resolution over the FOV of FastSPECT II in its low-magnification (2.4X) configuration is around 2.4 mm, computed from the Fourier crosstalk matrix. The system sensitivity measured with a 99mTc point source at the center of the FOV is about 267 cps/MBq. The system detectability was evaluated by computing the ideal-observer performance on SKE/BKE (signal-known-exactly/background-known-exactly) detection tasks. To reduce the system-calibration time and achieve finer reconstruction grids, two schemes for interpolating H were implemented and compared: these are centroid interpolation with Gaussian fitting and Fourier interpolation. Reconstructed phantom and mouse-cardiac images demonstrated the effectiveness of the H-matrix interpolation. Tomographic reconstruction can be formulated as a linear inverse problem and solved using statistical-estimation techniques. Several iterative reconstruction algorithms were introduced, including maximum-likelihood expectation-maximization (ML-EM) and its ordered-subsets (OS) version, and some least-squares (LS) and weighted-least-squares (WLS) algorithms such
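The ML-EM algorithm named above has a compact multiplicative update, x ← x / (Hᵀ1) · Hᵀ(y / Hx), using exactly the calibrated system matrix H the abstract describes. A tiny self-contained sketch with a random stand-in for H (the dimensions and normalization are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny hypothetical system matrix H: 30 detector bins x 20 voxels.
H = rng.random((30, 20))
H /= H.sum(axis=0)                     # column-normalized for simplicity
truth = rng.random(20) * 100
y = rng.poisson(H @ truth)             # measured counts

# ML-EM iterations: x <- x / (H^T 1) * H^T ( y / (H x) )
x = np.ones(20)
sens = H.sum(axis=0)                   # sensitivity image H^T 1
for _ in range(200):
    x *= (H.T @ (y / np.maximum(H @ x, 1e-12))) / sens

# A known ML-EM property: total projected counts match total measured counts.
print(np.isclose((H @ x).sum(), y.sum()))
```

The ordered-subsets (OS) variant mentioned in the abstract simply applies this update to subsets of the rows of H in turn for faster convergence.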
Image reconstruction for the ClearPET™ Neuro
NASA Astrophysics Data System (ADS)
Weber, Simone; Morel, Christian; Simon, Luc; Krieguer, Magalie; Rey, Martin; Gundlich, Brigitte; Khodaverdi, Maryam
2006-12-01
ClearPET™ is a family of small-animal PET scanners which are currently under development within the Crystal Clear Collaboration (CERN). All scanners are based on the same detector block design using individual LSO and LuYAP crystals in phoswich configuration, coupled to multi-anode photomultiplier tubes. One of the scanners, the ClearPET™ Neuro, is designed for applications in neuroscience. Four detector blocks with 64 2×2×10 mm LSO and LuYAP crystals, arranged in line, build a module. Twenty modules are arranged in a ring with a ring diameter of 13.8 cm and an axial size of 11.2 cm. An insensitive region at the border of the detector heads results in gaps between the detectors axially and tangentially. The detectors rotate by 360° in step-and-shoot mode during data acquisition. Every second module is shifted axially to partly compensate for the gaps between the detector blocks in a module. This unconventional scanner geometry requires dedicated image reconstruction procedures. The data acquisition records single events that are stored with a time mark in a dedicated list-mode format. Coincidences are associated off-line by software. After sorting the data into 3D sinograms, image reconstruction is performed using the Ordered Subset Maximum A Posteriori One-Step Late (OSMAPOSL) iterative algorithm implemented in the Software for Tomographic Image Reconstruction (STIR) library. Due to the non-conventional scanner design, careful estimation of the sensitivity matrix is needed to obtain artifact-free images from the ClearPET™ Neuro.
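Off-line coincidence association from list-mode singles amounts to scanning a time-sorted event list and pairing events whose time marks fall within a coincidence window. A minimal sketch (the window width, event rate and detector count are illustrative assumptions, not ClearPET parameters):

```python
import numpy as np

# Hypothetical list-mode singles: time-sorted timestamps (ns) and detector ids.
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 1e6, 5000))
dets = rng.integers(0, 160, 5000)

window = 10.0  # ns coincidence window (assumed value)
pairs = []
i = 0
while i < len(times) - 1:
    if times[i + 1] - times[i] <= window and dets[i] != dets[i + 1]:
        pairs.append((i, i + 1))   # accept the pair as a coincidence
        i += 2                     # consume both singles
    else:
        i += 1

print(len(pairs))
```

A production sorter would also reject triples within the window and apply per-detector time offsets before pairing; both are omitted here for clarity.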
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientist in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
Modeling and image reconstruction in spectrally resolved bioluminescence tomography
NASA Astrophysics Data System (ADS)
Dehghani, Hamid; Pogue, Brian W.; Davis, Scott C.; Patterson, Michael S.
2007-02-01
Recent interest in modeling and reconstruction algorithms for Bioluminescence Tomography (BLT) has increased and led to the general consensus that non-spectrally resolved, intensity-based BLT results in a non-unique problem. However, the light emitted from, for example, firefly luciferase is widely distributed over the band of wavelengths from 500 nm to 650 nm and above, with the dominant fraction emitted from tissue being above 550 nm. This paper demonstrates the development of an algorithm used for multi-wavelength 3D spectrally resolved BLT image reconstruction in a mouse model. It is shown that using single-view data, bioluminescence sources up to 15 mm deep can be successfully recovered given correct information about the underlying tissue absorption and scatter.
Incremental volume reconstruction and rendering for 3-D ultrasound imaging
NASA Astrophysics Data System (ADS)
Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry
1992-09-01
In this paper, we present approaches toward an interactive visualization of real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line, and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
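The incremental reconstruction step can be sketched as splatting each tracked 2-D slice into a running 3-D accumulation volume, normalizing by per-voxel weights. A simplified nearest-voxel version in NumPy (the volume size, slice geometry and 3-DOF axial sweep are illustrative assumptions):

```python
import numpy as np

# Running 3-D volume and per-voxel accumulation weights.
vol = np.zeros((64, 64, 64))
wts = np.zeros_like(vol)

def insert_slice(pixels, origin, u_axis, v_axis, spacing=1.0):
    """Splat one tracked 2-D slice into the volume (nearest-voxel allocation)."""
    h, w = pixels.shape
    for r in range(h):
        for c in range(w):
            p = origin + r * spacing * u_axis + c * spacing * v_axis
            i, j, k = np.round(p).astype(int)
            if 0 <= i < 64 and 0 <= j < 64 and 0 <= k < 64:
                vol[i, j, k] += pixels[r, c]
                wts[i, j, k] += 1.0

# Stream of slices: axial planes at increasing depth (a 3-DOF-style sweep).
rng = np.random.default_rng(0)
for z in range(10, 20):
    insert_slice(rng.random((32, 32)),
                 origin=np.array([float(z), 16.0, 16.0]),
                 u_axis=np.array([0.0, 1.0, 0.0]),
                 v_axis=np.array([0.0, 0.0, 1.0]))

recon = np.divide(vol, wts, out=np.zeros_like(vol), where=wts > 0)
print(recon[15, 20, 20] > 0, recon[40, 20, 20] == 0)
```

A production system would vectorize the splat and use trilinear distribution instead of nearest-voxel rounding; the incremental structure, reconstructing as each slice arrives, is the point here.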
SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments
Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin
2016-01-01
Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with OPT data, in contrast to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529
Image and Data-analysis Tools For Paleoclimatic Reconstructions
NASA Astrophysics Data System (ADS)
Pozzi, M.
We propose here a directory of software tools and computing resources chosen to address problems in paleoclimatic reconstruction. The following points are discussed in particular: 1) Numerical analysis of paleo-data (fossil abundances, species analyses, isotopic signals, chemical-physical parameters, biological data): a) statistical analyses (univariate, diversity, rarefaction, correlation, ANOVA, F and T tests, Chi^2); b) multidimensional analyses (principal components, correspondence, cluster analysis, seriation, discriminant, autocorrelation, spectral analysis); c) neural analyses (backpropagation nets, Kohonen feature maps, Hopfield nets, genetic algorithms). 2) Graphical analysis (visualization tools) of paleo-data (quantitative and qualitative fossil abundances, species analyses, isotopic signals, chemical-physical parameters): a) 2-D data analyses (graph, histogram, ternary, survivorship); b) 3-D data analyses (direct volume rendering, isosurfaces, segmentation, surface reconstruction, surface simplification, generation of tetrahedral grids). 3) Quantitative and qualitative digital image analysis (macro- and microfossil image analysis, Scanning Electron Microscope and Optical Polarized Microscope image capture and analysis, morphometric data analysis, 3-D reconstructions): a) 2-D image analysis (correction of image defects, enhancement of image detail, converting texture and directionality to grey-scale or colour differences, visual enhancement using pseudo-colour, pseudo-3D, thresholding of image features, binary image processing, measurements, stereological measurements, measuring features on a white background); b) 3-D image analysis (basic stereological procedures; two-dimensional structures: area fraction from the point count, volume fraction from the point count; three-dimensional structures: surface area and the line intercept count; three-dimensional microstructures: line length and the
Spectral image reconstruction by a tunable LED illumination
NASA Astrophysics Data System (ADS)
Lin, Meng-Chieh; Tsai, Chen-Wei; Tien, Chung-Hao
2013-09-01
Spectral reflectance estimation of an object via a low-dimensional snapshot requires both image acquisition and a posterior numerical estimation analysis. In this study, we set up a system incorporating a homemade cluster of LEDs with spectral modulation for scene illumination, and a multi-channel CCD to acquire multi-channel images by means of a fully digital process. Principal component analysis (PCA) and pseudo-inverse transformation were used to reconstruct the spectral reflectance within a constrained training set, such as the Munsell and Macbeth Color Checker sets. The average spectral RMS error of the reflectances reconstructed from 34 patches of a standard color checker was 0.234. The purpose is to investigate the use of this system in conjunction with imaging analysis for industrial or medical inspection, quickly and with acceptable accuracy; the approach was preliminarily validated.
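The PCA-plus-pseudoinverse estimator described above can be written in a few lines: project training reflectances onto a small PCA basis B, then recover a scene reflectance from the camera's channel values c via r ≈ mean + B·pinv(M·B)·(c − M·mean), where M holds the channel spectral sensitivities. A hedged sketch with synthetic smooth reflectances standing in for Munsell patches (the basis size, channel count and sensitivities are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 31  # wavelength samples, e.g. 400-700 nm in 10 nm steps (assumed)

# Hypothetical training reflectances built from 5 smooth spectral bases.
wl = np.linspace(0, 1, n_wl)
bases = np.stack([np.exp(-((wl - mu) ** 2) / 0.05) for mu in np.linspace(0, 1, 5)])
train = rng.random((200, 5)) @ bases * 0.2

# PCA basis from the training set (first 6 components).
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:6].T                                # (n_wl, 6) basis

# Hypothetical LED/CCD system: 8 spectrally modulated channels.
M = np.abs(rng.standard_normal((8, n_wl)))  # channel spectral sensitivities

r_true = train[0]                           # a "scene" reflectance
c = M @ r_true                              # captured channel values

# Reconstruct: r ~ mean + B @ pinv(M @ B) @ (c - M @ mean)
r_est = mean + B @ np.linalg.pinv(M @ B) @ (c - M @ mean)

rms = np.sqrt(np.mean((r_est - r_true) ** 2))
print(rms < 0.1)
```

With real noisy captures the pseudoinverse is typically replaced by a regularized (Wiener-style) inverse, but the structure is the same.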
Image reconstruction of FT-IR microspectrometric data
NASA Astrophysics Data System (ADS)
Lasch, Peter; Lewis, E. Neil; Kidder, Linda H.; Naumann, Dieter
2000-03-01
FT-IR microspectrometry, particularly in combination with digital imaging techniques, shows great promise for in-vivo and ex-vivo medical diagnosis. This promise rests on the fact that the method delivers information on the chemical structure and composition of a sample, and that any disease is linked to changes in the molecular and structural composition of cells and tissues. Typically, these changes are highly specific for a given tissue structure and are therefore potentially detectable by FT-IR microspectrometry. In this paper we present several approaches for the representation of mid-infrared microspectroscopic data acquired at high spatial resolution using an MCT focal plane array detector. The applicability of image-reassembly methodologies such as functional group analysis, image reconstruction based on factor analysis, and artificial neural network analysis to the IR data is discussed.
NASA Astrophysics Data System (ADS)
Kotasidis, F. A.; Matthews, J. C.; Reader, A. J.; Angelis, G. I.; Zaidi, H.
2014-10-01
Parametric imaging in thoracic and abdominal PET can provide additional parameters more relevant to the pathophysiology of the system under study. However, dynamic data in the body are noisy due to the limiting counting statistics leading to suboptimal kinetic parameter estimates. Direct 4D image reconstruction algorithms can potentially improve kinetic parameter precision and accuracy in dynamic PET body imaging. However, construction of a common kinetic model is not always feasible and in contrast to post-reconstruction kinetic analysis, errors in poorly modelled regions may spatially propagate to regions which are well modelled. To reduce error propagation from erroneous model fits, we implement and evaluate a new approach to direct parameter estimation by incorporating a recently proposed kinetic modelling strategy within a direct 4D image reconstruction framework. The algorithm uses a secondary more general model to allow a less constrained model fit in regions where the kinetic model does not accurately describe the underlying kinetics. A portion of the residuals then is adaptively included back into the image whilst preserving the primary model characteristics in other well modelled regions using a penalty term that trades off the models. Using fully 4D simulations based on dynamic [15O]H2O datasets, we demonstrate reduction in propagation-related bias for all kinetic parameters. Under noisy conditions, reductions in bias due to propagation are obtained at the cost of increased noise, which in turn results in increased bias and variance of the kinetic parameters. This trade-off reflects the challenge of separating the residuals arising from poor kinetic modelling fits from the residuals arising purely from noise. Nonetheless, the overall root mean square error is reduced in most regions and parameters. Using the adaptive 4D image reconstruction improved model fits can be obtained in poorly modelled regions, leading to reduced errors potentially propagating
A generalized Fourier penalty in prior-image-based reconstruction for cross-platform imaging
NASA Astrophysics Data System (ADS)
Pourmorteza, A.; Siewerdsen, J. H.; Stayman, J. W.
2016-03-01
Sequential CT studies present an excellent opportunity to apply prior-image-based reconstruction (PIBR) methods that leverage high-fidelity prior imaging studies to improve image quality and/or reduce x-ray exposure in subsequent studies. One major obstacle in using PIBR is that the initial and subsequent studies are often performed on different scanners (e.g. diagnostic CT followed by CBCT for interventional guidance); this results in mismatch in attenuation values due to hardware and software differences. While improved artifact correction techniques can potentially mitigate such differences, the correction is often incomplete. Here, we present an alternate strategy where the PIBR itself is used to mitigate these differences. We define a new penalty for the previously introduced PIBR called Reconstruction of Difference (RoD). RoD differs from many other PIBRs in that it reconstructs only changes in the anatomy (vs. reconstructing the current anatomy). Direct regularization of the difference image in RoD provides an opportunity to selectively penalize spatial frequencies of the difference image (e.g. low frequency differences associated with attenuation offsets and shading artifacts) without interfering with the variations in the unchanged background image. We leverage this flexibility, introduce a novel regularization strategy using a generalized Fourier penalty within the RoD framework, and develop the modified reconstruction algorithm. We evaluate the performance of the new approach in both simulation studies and in physical CBCT test-bench data. We find that the generalized Fourier penalty can be highly effective in reducing low-frequency x-ray artifacts through selective suppression of spatial frequencies in the reconstructed difference image.
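A frequency-selective penalty of this kind can be sketched in a few lines. The cutoff, weights, and test images below are illustrative assumptions, not the paper's actual penalty design:

```python
import numpy as np

def fourier_penalty(diff_image, weight):
    """Weighted squared norm of the difference image's spectrum:
    large weights on the spatial frequencies we want to suppress."""
    spectrum = np.fft.fft2(diff_image)
    return float(np.sum(weight * np.abs(spectrum) ** 2)) / diff_image.size

# Illustrative weight mask penalizing only low spatial frequencies,
# where attenuation offsets and shading artifacts concentrate.
n = 64
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
low_freq_weight = (np.hypot(fx, fy) < 0.05).astype(float)

shading_like = 0.1 * np.ones((n, n))   # smooth offset, e.g. attenuation mismatch
true_change = np.zeros((n, n))
true_change[30:34, 30:34] = 0.1        # genuine local anatomical change
p_shading = fourier_penalty(shading_like, low_freq_weight)
p_change = fourier_penalty(true_change, low_freq_weight)
```

Under this mask the smooth shading-like offset is penalized far more heavily than the localized change, which is the selectivity a frequency-domain penalty on the difference image exploits.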
Advances in the reconstruction of LBT LINC-NIRVANA images
NASA Astrophysics Data System (ADS)
La Camera, A.; Desiderá, G.; Arcidiacono, C.; Boccacci, P.; Bertero, M.
2007-09-01
Context: LINC-NIRVANA, the Fizeau interferometer of the Large Binocular Telescope (LBT), will require routine use of image reconstruction methods for data reduction. For this purpose our group has already developed the software package AIRY (Astronomical Image Restoration in interferometrY). Aims: Observations of a target, with different orientations of the baseline of LINC-NIRVANA, will provide images with different orientations with respect to the CCD camera. This rotation effect was not taken into account in our previous work. Therefore in this paper we propose a method able to compensate for the rotation of the field of view. Moreover we investigate acceleration techniques for reducing the computational burden of multiple image deconvolution. Methods: The basic method is a suitable modification of the Richardson-Lucy algorithm, also implementing an approach we proposed for reducing boundary effects. Acceleration techniques, proposed by Biggs & Andrews, are extended and applied to this new algorithm. Finally a method for estimating the unknown point spread function (PSF) by extracting and extrapolating the image of a reference star is developed and implemented. Results: The method introduced for compensating object rotation and reducing boundary effects, as well as its accelerated versions, are tested on simulated LINC-NIRVANA images, using the VLT image of the Crab Nebula as test object. The results are very promising. Moreover the method for PSF extraction is tested on simulated images, derived from the LBT image of the galaxy NGC 6946 and obtained by convolving this image with PSFs computed by means of the numerical code LOST (Layer Oriented Simulation Tool).
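The Richardson-Lucy update at the core of such deconvolution methods can be sketched in its basic single-image, 1D form. This is a generic sketch of the classical algorithm only; the rotation compensation, boundary-effect reduction, and Biggs & Andrews acceleration described above are omitted:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=50):
    """Basic Richardson-Lucy deconvolution for a 1D signal.
    image: observed (blurred) data; psf: normalized point spread function."""
    estimate = np.full_like(image, image.mean())   # flat, positive initial guess
    psf_flipped = psf[::-1]                        # adjoint of convolution
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)           # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")  # multiplicative update
    return estimate

# Toy example: recover a two-spike "star field" blurred by a Gaussian PSF.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf /= psf.sum()
y = np.convolve(x, psf, mode="same")
restored = richardson_lucy(y, psf, iterations=50)
```

The multiplicative form keeps the estimate nonnegative, which is why the algorithm is the workhorse of astronomical deconvolution.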
Local Surface Reconstruction from MER images using Stereo Workstation
NASA Astrophysics Data System (ADS)
Shin, Dongjoe; Muller, Jan-Peter
2010-05-01
The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) Mission and in the future the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation which is then followed by a tiepoint refinement, stereo-matching using region growing and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphic hardware independence, the stereo application has been implemented using JPL's JADIS graphic library which is written in JAVA, and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase the code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often necessary to employ an optional validity check and/or quality enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, it is obvious that the other criterion to assess the quality of reconstruction is the density (or completeness) of reconstruction, which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL
Cardiac motion correction based on partial angle reconstructed images in x-ray CT
Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom
2015-05-15
Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate for remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of
Loomis, Eric; Grim, Gary; Wilde, Carl; Wilke, Mark; Wilson, Doug; Morgan, George; Tregillis, Ian; Clark, David; Finch, Joshua; Fittinghoff, D; Bower, D
2010-01-01
Development of analysis techniques for neutron imaging at the National Ignition Facility (NIF) is an important and difficult task for the detailed understanding of high neutron yield inertial confinement fusion (ICF) implosions. These methods, once developed, must provide accurate images of the hot and cold fuel so that information about the implosion, such as symmetry and areal density, can be extracted. We are currently considering multiple analysis pathways for obtaining this source distribution of neutrons given a measured pinhole image with a scintillator and camera system. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations [E. Loomis et al. IFSA 2009]. We are currently striving to apply the technique to real data by applying a series of realistic effects that will be present for experimental images. These include various sources of noise, misalignment uncertainties at both the source and image planes, as well as scintillator and camera blurring. Some tests on the quality of image reconstructions have also been performed based on point resolution and Legendre mode improvement of recorded images. So far, the method has proven sufficient to overcome most of these experimental effects with continued development.
[Image reconstruction of conductivity on magnetoacoustic tomography with magnetic induction].
Li, Jingyu; Yin, Tao; Liu, Zhipeng; Xu, Guohui
2010-04-01
The electric characteristics of tissue, such as impedance and conductivity, change when pathological changes occur in biological tissue. The change in electric characteristics usually takes place before the change in tissue density, and the difference in electric characteristics such as conductivity between normal tissue and pathological tissue is pronounced. Magneto-acoustic tomography with magnetic induction is based on the theory of magnetic eddy-current induction and the principles of vibration generation and acoustic transmission, and it recovers the boundary of the pathological tissue. Pathological changes can thus be inspected by electrical-characteristic imaging that is noninvasive to the tissue. In this study, a two-layer concentric spherical model is established to simulate malignant tumor tissue surrounded by normal tissue. The mutual relations of the magneto-acoustic coupling effect and the coupling equations in the magnetic field are used to derive algorithms for reconstructing the conductivity. A simulation study is conducted to test the proposed model and validate the performance of the reconstruction algorithms. The results indicate that the signal processing method used in this paper can image the conductivity boundaries of the sample in the scanning cross-section. The computer simulation results validate the feasibility of applying magneto-acoustic tomography with magnetic induction to malignant tumor imaging. PMID:20481330
An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.
Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan
2015-11-01
The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic regions (the fluid flow). Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with the state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, the temporal resolution can be substantially increased, and thus fluid flow experiments with faster dynamics can be performed. PMID:26259219
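The piecewise-constant prior for a voxel in the dynamic region amounts to fitting a step function to its attenuation curve. A minimal least-squares sketch with a single changepoint (the data and the exhaustive search are illustrative, not the paper's reconstruction machinery):

```python
import numpy as np

def fit_step(curve):
    """Fit a single-step piecewise-constant model to a voxel's attenuation
    curve: a constant 'before' level, a constant 'after' level, and one
    changepoint, as for an advancing fluid/air boundary passing the voxel."""
    n = len(curve)
    best = (np.inf, 0, curve.mean(), curve.mean())
    for t in range(1, n):                      # exhaustive changepoint search
        a, b = curve[:t].mean(), curve[t:].mean()
        err = np.sum((curve[:t] - a) ** 2) + np.sum((curve[t:] - b) ** 2)
        if err < best[0]:
            best = (err, t, a, b)
    return best[1:]                            # (changepoint, level before, level after)

# Toy attenuation curve: air (0) until frame 10, then fluid (1).
curve = np.array([0.0] * 10 + [1.0] * 10)
t, a, b = fit_step(curve)
```

In the actual algorithm this temporal model acts as a constraint inside the reconstruction rather than a post-hoc fit, but the step structure it enforces is the same.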
Maiti, Abhik; Chakravarty, Debashish
2016-01-01
3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic 3D watertight surfaces of different irregularly shaped objects, from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used in reconstructing the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the Ball-pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters in generation of 3D surface from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on Samples per node (SN), greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the Ball-pivoting algorithm is found to be highly dependent upon Clustering radius and Angle threshold values. The results obtained from this study give the readers of the article a valuable insight into the effects of different control parameters on determining the reconstructed surface quality. PMID:27386376
Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
2000-01-01
This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single-energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution.
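The polychromatic forward model behind beam hardening can be illustrated with a two-bin toy spectrum. The spectrum weights and attenuation values below are made up for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical two-bin spectrum and energy-dependent attenuation of one material.
spectrum = np.array([0.6, 0.4])   # relative photon counts in two energy bins
mu = np.array([0.5, 0.2])         # attenuation coefficient (1/cm) per bin

def detector_signal(thickness_cm):
    """Polychromatic Beer-Lambert: each energy bin attenuates at its own rate,
    and the detector sums the surviving photons over the spectrum."""
    return float(np.sum(spectrum * np.exp(-mu * thickness_cm)))

# Beam hardening: the effective attenuation -log(I)/L decreases with thickness,
# because the more strongly attenuated part of the spectrum is absorbed first.
eff = [-np.log(detector_signal(L)) / L for L in (1.0, 5.0, 10.0)]
```

A reconstruction that assumes a single (monochromatic) attenuation per material therefore misestimates thick paths, which is exactly the nonlinearity a polychromatic model corrects for.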
NASA Astrophysics Data System (ADS)
Liu, Zhe; Zhang, Li; Jiang, Xiaolei
2012-10-01
Tomographic Gamma Scanning (TGS) is a non-destructive analysis technology based on the principles of Emission Computed Tomography (ECT). Dedicated to imaging of Gamma-ray emission, TGS reveals radioactivity distributions of different radionuclides inside target objects such as nuclear waste barrels. Due to the special characteristics of the TGS imaging geometry, namely the relatively larger detector cell size and the more pronounced view change in the imaging region, the line integral projection model widely used in ECT problems is no longer applicable for radioactive intensity image reconstruction in TGS. The alternative Monte-Carlo based methods, which calculate the detection efficiency at every detecting position for each voxel, are effective and accurate but time-consuming. In this paper, we consider the geometrical detection efficiency of the detector, which depends on the relative detector-voxel position, separately from the intrinsic detection efficiency. Further, a new geometrical correction method is proposed, in which the voxel volume within the detector view is applied as the projection weight, substituting the track length used in the line integral model. The geometrical detection efficiencies at different positions are analytically expressed by the volume integral, over the voxel, of the geometrical point-source response function of the detector. Numerical simulations are performed and discussed. The results show that the proposed method reduces the reconstruction errors compared to the line integral projection method while gaining better calculating efficiency and flexibility than former Monte-Carlo methods.
NASA Astrophysics Data System (ADS)
Mun, Songchol; Bao, Yuequan; Li, Hui
2015-11-01
The accurate estimation of dispersion curves has been a key issue for ensuring high quality in geophysical surface wave exploration. Many studies have been carried out on the generation of a high-resolution dispersion image from array measurements. In this study, the sparse signal representation and reconstruction techniques are employed to obtain the high resolution Rayleigh-wave dispersion image from seismic wave data. First, a sparse representation of the seismic wave data is introduced, in which the signal is assumed to be sparse in terms of wave speed. Then, the sparse signal is reconstructed by optimization using l1-norm regularization, which gives the signal amplitude spectrum as a function of wave speed. A dispersion image in the f-v domain is generated by arranging the sparse spectra for all frequency slices in the frequency range. Finally, to show the efficiency of the proposed approach, the Surfbar-2 field test data, acquired by B. Luke and colleagues at the University of Nevada Las Vegas, are analysed. By comparing the real-field dispersion image with the results from other methods, the high mode-resolving ability of the proposed approach is demonstrated, particularly for a case with strongly coherent modes.
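The l1-regularized reconstruction step can be sketched with iterative soft-thresholding (ISTA) on a toy underdetermined system. The random dictionary, dimensions, and regularization weight below are illustrative assumptions; the paper's actual formulation works over a wave-speed basis for each frequency slice:

```python
import numpy as np

def ista(A, b, lam=0.02, iterations=500):
    """Iterative soft-thresholding for l1-regularized least squares,
    min_x 0.5*||Ax - b||^2 + lam*||x||_1 -- the kind of sparse
    reconstruction that sharpens a spectrum assumed sparse in wave speed."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        z = x - step * (A.T @ (A @ x - b))         # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)    # underdetermined measurement matrix
x_true = np.zeros(80)
x_true[[5, 30, 61]] = [2.0, -1.5, 1.0]             # sparse "spectrum": 3 active speeds
b = A @ x_true
x_hat = ista(A, b)
```

The soft threshold drives most coefficients exactly to zero, which is what yields the high mode resolution relative to conventional dispersion imaging.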
Reconstructing the open-field magnetic geometry of solar corona using coronagraph images
NASA Astrophysics Data System (ADS)
Uritsky, Vadim M.; Davila, Joseph M.; Jones, Shaela; Burkepile, Joan
2015-04-01
The upcoming Solar Probe Plus and Solar Orbiter missions will provide a new insight into the inner heliosphere magnetically connected with the topologically complex and eruptive solar corona. Physical interpretation of these observations will depend on the accurate reconstruction of the large-scale coronal magnetic field. We argue that such reconstruction can be performed using photospheric extrapolation codes constrained by white-light coronagraph images. The field extrapolation component of this project is featured in a related presentation by S. Jones et al. Here, we focus on our image-processing algorithms conducting an automated segmentation of coronal loop structures. In contrast to previously proposed segmentation codes designed for detecting small-scale closed loops in the vicinity of active regions, our technique focuses on the large-scale geometry of the open-field coronal features observed at significant radial distances from the solar surface. Coronagraph images are transformed into a polar coordinate system and undergo radial detrending and initial noise reduction followed by an adaptive angular differentiation. An adjustable threshold is applied to identify candidate coronagraph features associated with the large-scale coronal field. A blob detection algorithm is used to identify valid features against a noisy background. The extracted coronal features are used to derive empirical directional constraints for magnetic field extrapolation procedures based on photospheric magnetograms. Two versions of the method, optimized for processing ground-based (Mauna Loa Solar Observatory) and satellite-based (STEREO Cor1 and Cor2) coronagraph images, are being developed.
Image reconstruction by the speckle-masking method.
Weigelt, G; Wirnitzer, B
1983-07-01
Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arcsec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography. PMID:19718124
Fast Multigrid Techniques in Total Variation-Based Image Reconstruction
NASA Technical Reports Server (NTRS)
Oman, Mary Ellen
1996-01-01
Existing multigrid techniques are used to effect an efficient method for reconstructing an image from noisy, blurred data. Total Variation minimization yields a nonlinear integro-differential equation which, when discretized using cell-centered finite differences, yields a full matrix equation. A fixed point iteration is applied with the intermediate matrix equations solved via a preconditioned conjugate gradient method which utilizes multi-level quadrature (due to Brandt and Lubrecht) to apply the integral operator and a multigrid scheme (due to Ewing and Shen) to invert the differential operator. With effective preconditioning, the method presented appears to require O(n) operations. Numerical results are given for a two-dimensional example.
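A minimal gradient-descent version of the smoothed Total Variation objective gives the flavor of what the fixed-point/multigrid machinery solves far more efficiently. This 1D denoising sketch (no blur operator, hypothetical parameters) uses the standard smoothing term beta to make the TV norm differentiable:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, beta=1e-2, iterations=300, step=0.05):
    """Gradient descent on 0.5*||u - y||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + beta),
    the smoothed Total Variation objective in 1D."""
    u = y.copy()
    for _ in range(iterations):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + beta)                     # smoothed TV derivative w.r.t. du
        g = np.concatenate(([-w[0]], -np.diff(w), [w[-1]]))  # chain rule: gradient w.r.t. u
        u -= step * ((u - y) + lam * g)
    return u

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(20), np.ones(20)])   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(40)
denoised = tv_denoise_1d(noisy)
```

TV regularization suppresses the noise while largely preserving the jump, the edge-preserving behavior that motivates its use in image reconstruction; the multigrid and multi-level quadrature components above address the cost of solving the resulting equations at scale.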
Cardiac-state-driven CT image reconstruction algorithm for cardiac imaging
NASA Astrophysics Data System (ADS)
Cesmeli, Erdogan; Edic, Peter M.; Iatrou, Maria; Hsieh, Jiang; Gupta, Rajiv; Pfoh, Armin H.
2002-05-01
Multi-slice CT scanners use EKG gating to predict the cardiac phase during slice reconstruction from projection data. Cardiac phase is generally defined with respect to the RR interval. The implicit assumption made is that the duration of events in an RR interval scales linearly when the heart rate changes. Using a more detailed EKG analysis, we evaluate the impact of relaxing this assumption on image quality. We developed a reconstruction algorithm that analyzes the associated EKG waveform to extract the natural cardiac states. A wavelet transform was used to decompose each RR interval into P, QRS, and T waves. Subsequently, cardiac phase was defined with respect to these waves instead of a percentage or time delay from the beginning or the end of RR intervals. The projection data were then tagged with the cardiac phase and processed using temporal weights that are functions of their cardiac phase. Finally, the tagged projection data were combined from multiple cardiac cycles using a multi-sector algorithm to reconstruct images. The new algorithm was applied to clinical data, collected on a 4-slice (GE LightSpeed Qx/i) and 8-slice CT scanner (GE LightSpeed Plus), with heart rates of 40 to 80 bpm. The quality of reconstruction is assessed by the visualization of the major arteries, e.g. RCA, LAD, LC, in the reformatted 3D images. Preliminary results indicate that the cardiac-state-driven reconstruction algorithm offers better image quality than its RR-based counterparts.
Medical Image Watermarking Technique for Accurate Tamper Detection in ROI and Exact Recovery of ROI
Eswaraiah, R.; Sreenivasa Reddy, E.
2014-01-01
In telemedicine, tampering may be introduced while transferring medical images. Before making any diagnostic decisions, the integrity of the region of interest (ROI) of the received medical image must be verified to avoid misdiagnosis. In this paper, we propose a novel fragile block-based medical image watermarking technique to avoid embedding distortion inside the ROI, verify the integrity of the ROI, accurately detect tampered blocks inside the ROI, and recover the original ROI with zero loss. In this proposed method, the medical image is segmented into three sets of pixels: ROI pixels, region of noninterest (RONI) pixels, and border pixels. Then, authentication data and information of the ROI are embedded in border pixels. Recovery data of the ROI are embedded into the RONI. Results of experiments conducted on a number of medical images reveal that the proposed method produces high quality watermarked medical images, identifies the presence of tampering inside the ROI with 100% accuracy, and recovers the original ROI without any loss. PMID:25328515
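The border-pixel authentication idea can be sketched with a toy fragile scheme: hash the ROI and hide the hash bits in border-pixel least-significant bits. This is a deliberate simplification under assumed conventions (SHA-256 as the authentication code, the top row as "border"); the paper's scheme also embeds ROI recovery data in the RONI, which is omitted here:

```python
import hashlib
import numpy as np

def embed_roi_hash(image, roi):
    """Embed a fragile integrity code (SHA-256 bits of the ROI) into the
    least-significant bits of the top border row of an 8-bit image."""
    out = image.copy()
    digest = hashlib.sha256(out[roi].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))   # 256 bits
    out[0, :len(bits)] = (out[0, :len(bits)] & 0xFE) | bits       # overwrite LSBs
    return out

def roi_intact(image, roi):
    """Recompute the ROI hash and compare with the embedded LSB bits."""
    digest = hashlib.sha256(image[roi].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bool(np.all((image[0, :len(bits)] & 1) == bits))

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(32, 300), dtype=np.uint8)
roi = (slice(8, 24), slice(8, 24))      # ROI well away from the border row
watermarked = embed_roi_hash(image, roi)
tampered = watermarked.copy()
tampered[10, 10] ^= 1                   # single-bit tamper inside the ROI
```

Because the embedding touches only border pixels, the ROI itself carries zero embedding distortion, which is the property the paper's design is built around.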
Detection and 3D reconstruction of traffic signs from multiple view color images
NASA Astrophysics Data System (ADS)
Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno
2013-03-01
3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. In order to reflect reality, the reconstruction process should meet both accuracy and precision requirements. In order to reach such a valid reconstruction from calibrated multi-view images, accurate and precise extraction of signs in every individual view is a must. This paper first presents an automatic pipeline for identifying and extracting the silhouette of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimum 3D silhouette for the detected signs. The first step, called detection, applies a color-based segmentation to generate ROIs (Regions of Interest) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to edge points. A ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove the perspective distortion and then matched with a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouette in the image plane is represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach taking into account epipolar geometry and the similarity of the categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm integrating a priori knowledge about the 3D shape of road signs as constraints. The algorithm is assessed on real and synthetic images and reached an average accuracy of 3.5cm for
NASA Astrophysics Data System (ADS)
Wu, Meng; Yang, Qiao; Maier, Andreas; Fahrig, Rebecca
2014-03-01
Polychromatic statistical reconstruction algorithms have very high computational demands due to the difficulty of the optimization problems and the large number of spectrum bins. We want to develop a more practical algorithm that has a simpler optimization problem, a faster numerical solver, and requires only a small amount of prior knowledge. In this paper, a modified optimization problem for polychromatic statistical reconstruction algorithms is proposed. The modified optimization problem utilizes the idea of determining scanned materials based on a first-pass FBP reconstruction to fix the ratios between the photoelectric and Compton scattering components of all image pixels. The reconstruction of a density image is easy to solve by a separable quadratic surrogate algorithm that is also applicable to the multi-material case. In addition, a spectrum binning method is introduced so that the full spectrum information is not required. The energy bin sizes and attenuations are optimized based on the true spectrum and object. With these approximations, the expected line integral values using only a few energy bins are very close to the true polychromatic values. Thus both the problem size and the computational demand caused by the large number of energy bins that are typically used to model a full spectrum are reduced. Simulations showed that three energy bins using the generalized spectrum binning method could provide an accurate approximation of the polychromatic X-ray signals. The average absolute error of the logarithmic detector signal is less than 0.003 for a 120 kVp spectrum. The proposed modified optimization problem and spectrum binning approach can effectively suppress beam hardening artifacts while providing low noise images.
Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging
Yu, Xingjian; Chen, Shuhang; Hu, Zhenghui; Liu, Meng; Chen, Yunmei; Shi, Pengcheng; Liu, Huafeng
2015-01-01
In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data, with frame durations ranging from 10 seconds to minutes, chosen under some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition and, moreover, cannot provide separate information about the structure and about the variation of shape and rate of metabolism, which play a major role in improving the visualization of contrast for some diagnostic requirements in application. This paper presents a novel low-rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component, while variations between successive frames are captured by a sparse component. The resulting nuclear norm and l1 norm related minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets. PMID:26540274
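The stationary-background/sparse-variation split is the classic low-rank-plus-sparse decomposition. A compact inexact augmented-Lagrangian sketch on a toy frame matrix follows; this is a generic robust-PCA-style solver under assumed parameters, operating directly on an image matrix rather than on sinograms, and it is not the paper's linearized alternating direction formulation:

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft threshold (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value threshold (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_sparse(M, iterations=60):
    """Inexact augmented-Lagrangian split M ~ L (low rank) + S (sparse)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard robust-PCA weight
    mu = 1.25 / np.linalg.norm(M, 2)
    Y = np.zeros_like(M)
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(iterations):
        L = svt(M - S + Y / mu, 1.0 / mu)     # update low-rank background
        S = soft(M - L + Y / mu, lam / mu)    # update sparse variations
        Y = Y + mu * (M - L - S)              # dual ascent on the constraint
        mu *= 1.2
    return L, S

rng = np.random.default_rng(2)
background = np.outer(rng.uniform(1, 2, 20), rng.uniform(1, 2, 30))  # rank-1 "static" part
spikes = np.zeros((20, 30))
spikes[[3, 11, 17], [5, 22, 9]] = 8.0     # sparse frame-to-frame changes
M = background + spikes
L, S = low_rank_sparse(M)
```

The sparse component isolates the localized changes while the low-rank component absorbs the stationary structure, which is the separation SLCR exploits to reconstruct background and variation independently.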
Exogenous specific fluorescence marker location reconstruction using surface fluorescence imaging
NASA Astrophysics Data System (ADS)
Avital, Garashi; Gannot, Israel; Chernomordik, Victor V.; Gannot, Gallya; Gandjbakhche, Amir H.
2003-07-01
Diseased tissue may be specifically marked by an exogenous fluorescent marker and then, following laser activation of the marker, optically and non-invasively detected through fluorescence imaging. Interaction of a fluorophore, conjugated to an appropriate antibody, with the antigen expressed by the diseased tissue, can indicate the presence of a specific disease. Using an optical detection system and a reconstruction algorithm, we were able to determine the fluorophore's position in the tissue. We present 3D reconstructions of the location of a fluorescent marker, FITC, in the tongues of mice. One group of BALB/c mice was injected with squamous cell carcinoma (SqCC) cell line to the tongue, while another group served as the control. After tumor development, the mice's tongues were injected with FITC conjugated to anti-CD3 and anti-CD19 antibodies. An Argon laser excited the marker at 488 nm while a high precision fluorescent camera collected the emitted fluorescence. Measurements were performed with the fluorescent marker embedded at various simulated depths. The simulation was performed using agarose-based gel slabs applied to the tongue as tissue-like phantoms. A biopsy was taken from every mouse after the procedure and the excised tissue was histologically evaluated. We reconstruct the fluorescent marker's location in 3D using an algorithm based on the random walk theory.
Guo, En-Yu; Chawla, Nikhilesh; Jing, Tao; Torquato, Salvatore; Jiao, Yang
2014-03-01
Heterogeneous materials are ubiquitous in nature and in synthetic settings and have a wide range of important engineering applications. Accurate modeling and reconstruction of the three-dimensional (3D) microstructure of topologically complex materials from limited morphological information, such as a two-dimensional (2D) micrograph, is crucial to the assessment and prediction of effective material properties and performance under extreme conditions. Here, we extend a recently developed dilation–erosion method and employ the Yeong–Torquato stochastic reconstruction procedure to model and generate a 3D austenitic–ferritic cast duplex stainless steel microstructure containing a percolating filamentary ferrite phase from 2D optical micrographs of the material sample. Specifically, the ferrite phase is dilated to produce a modified target 2D microstructure and the resulting 3D reconstruction is eroded to recover the percolating ferrite filaments. The dilation–erosion reconstruction is compared with the actual 3D microstructure, obtained from serial sectioning (polishing), as well as the standard stochastic reconstructions incorporating topological connectedness information. The fact that the former can achieve the same level of accuracy as the latter suggests that the dilation–erosion procedure is tantamount to incorporating appreciably more topological and geometrical information into the reconstruction while being much more computationally efficient. - Highlights: • Spatial correlation functions used to characterize filamentary ferrite phase • Clustering information assessed from 3D experimental structure via serial sectioning • Stochastic reconstruction used to generate a 3D virtual structure from a 2D micrograph • Dilation–erosion method to improve accuracy of 3D reconstruction.
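The Yeong–Torquato procedure mentioned above anneals pixel swaps until a chosen correlation function matches its target. The sketch below is a heavily simplified toy (row-wise two-point correlation only, zero-temperature acceptance, a 16×16 binary image) rather than the published multi-correlation, cooled-annealing method.

```python
import numpy as np

def s2_rows(img):
    # Row-wise two-point correlation via FFT autocorrelation, row-averaged.
    f = np.fft.fft(img, axis=1)
    ac = np.fft.ifft(f * np.conj(f), axis=1).real / img.shape[1]
    return ac.mean(axis=0)

def yt_reconstruct(target_s2, shape, n_ones, steps=3000, seed=1):
    # Zero-temperature pixel-swap annealing toward the target correlation.
    rng = np.random.default_rng(seed)
    img = np.zeros(shape)
    img.flat[rng.choice(img.size, n_ones, replace=False)] = 1.0
    energy = np.sum((s2_rows(img) - target_s2) ** 2)
    for _ in range(steps):
        i = rng.choice(np.flatnonzero(img == 1.0))
        j = rng.choice(np.flatnonzero(img == 0.0))
        img.flat[i], img.flat[j] = 0.0, 1.0        # swap preserves volume fraction
        e_new = np.sum((s2_rows(img) - target_s2) ** 2)
        if e_new <= energy:
            energy = e_new                          # keep improving swaps
        else:
            img.flat[i], img.flat[j] = 1.0, 0.0    # otherwise undo
    return img, energy

target = np.zeros((16, 16))
target[:, :8] = 1.0                                 # striped "connected" medium
recon, final_e = yt_reconstruct(s2_rows(target), target.shape, int(target.sum()))
```

Swapping a filled and an empty pixel keeps the volume fraction fixed, which is why the procedure anneals only the spatial arrangement of the phase.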
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computation burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
Ahmad, Rizwan; Deng, Yuanmu; Vikram, Deepti S.; Clymer, Bradley; Srinivasan, Parthasarathy; Zweier, Jay L.; Kuppusamy, Periannan
2007-01-01
In continuous wave (CW) electron paramagnetic resonance imaging (EPRI), high-quality reconstructed images along with fast and reliable data acquisition are highly desirable for many biological applications. An accurate representation of a uniform distribution of projection data is necessary to ensure high reconstruction quality. The current techniques for data acquisition suffer from nonuniformities or local anisotropies in the distribution of projection data and provide a poor approximation of a truly uniform and isotropic distribution. In this work, we have implemented a technique based on the Quasi-Monte Carlo method to acquire projections with a more uniform and isotropic distribution of data over a 3D acquisition space. The proposed technique exhibits improvements in reconstruction quality in terms of both mean-square error and visual judgment. The effectiveness of the suggested technique is demonstrated using computer simulations and 3D EPRI experiments. The technique is robust and exhibits consistent performance for different object configurations and orientations. PMID:17095271
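A quasi-Monte Carlo point set on the sphere can be generated by mapping a low-discrepancy sequence area-preservingly onto spherical coordinates. The Halton-based construction below is one common choice and is an illustrative assumption, not necessarily the authors' sampling scheme.

```python
import numpy as np

def halton(n, base):
    # Van der Corput / Halton low-discrepancy sequence in one dimension.
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def qmc_sphere_directions(n):
    # Area-preserving map: cos(polar angle) uniform in [-1, 1],
    # azimuth uniform in [0, 2*pi) -> quasi-uniform directions.
    u, v = halton(n, 2), halton(n, 3)
    z = 2.0 * u - 1.0
    phi = 2.0 * np.pi * v
    r = np.sqrt(1.0 - z ** 2)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

dirs = qmc_sphere_directions(256)
```

Because the Halton pairs fill the unit square more evenly than pseudo-random points, the mapped directions avoid the local clustering that degrades projection-distribution uniformity.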
Hussain, Fahad Ahmed; Mail, Noor; Shamy, Abdulrahman M; Suliman, Alghamdi; Saoudi, Abdelhamid
2016-01-01
Image quality is a key issue in radiology, particularly in a clinical setting where it is important to achieve accurate diagnoses while minimizing radiation dose. Some computed tomography (CT) manufacturers have introduced algorithms that claim significant dose reduction. In this study, we assessed CT image quality produced by two reconstruction algorithms provided with GE Healthcare's Discovery 690 Elite positron emission tomography (PET) CT scanner. Image quality was measured for images obtained at various doses with both conventional filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR) algorithms. A standard CT dose index (CTDI) phantom and a pencil ionization chamber were used to measure the CT dose at 120 kVp and an exposure of 260 mAs. Image quality was assessed using two phantoms. CT images of both phantoms were acquired at tube voltage (kV) of 120 with exposures ranging from 25 mAs to 400 mAs. Images were reconstructed using FBP and ASIR ranging from 10% to 100%, then analyzed for noise, low-contrast detectability, contrast-to-noise ratio (CNR), and modulation transfer function (MTF). Noise was 4.6 HU in water phantom images acquired at 260 mAs/FBP 120 kV and 130 mAs/50% ASIR 120 kV. The large objects (frequency < 7 lp/cm) retained fairly acceptable image quality at 130 mAs/50% ASIR, compared to 260 mAs/FBP. The application of ASIR for small objects (frequency > 7 lp/cm) showed poor visibility compared to FBP at 260 mAs and even worse for images acquired at less than 130 mAs. ASIR blending more than 50% at low dose tends to reduce contrast of small objects (frequency > 7 lp/cm). We concluded that dose reduction and ASIR should be applied with close attention if the objects to be detected or diagnosed are small (frequency > 7 lp/cm). Further investigations are required to correlate the small objects (frequency > 7 lp/cm) to patient anatomy and clinical diagnosis. PMID:27167261
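The contrast-to-noise ratio reported in such phantom studies is typically computed from region-of-interest statistics. A minimal sketch follows, using one common CNR definition (background standard deviation in the denominator; definitions vary across papers and vendors), with illustrative ROI sizes and HU values keyed to the 4.6 HU noise figure above.

```python
import numpy as np

def contrast_to_noise(roi_object, roi_background):
    # CNR = |mean(object) - mean(background)| / std(background).
    # One common definition; some studies use a pooled noise estimate.
    return abs(roi_object.mean() - roi_background.mean()) / roi_background.std()

rng = np.random.default_rng(0)
background = rng.normal(0.0, 4.6, size=(64, 64))   # ~4.6 HU noise, as measured above
low_contrast_obj = background[:16, :16] + 10.0     # hypothetical 10 HU object
cnr = contrast_to_noise(low_contrast_obj, background)
```

With 10 HU of contrast on 4.6 HU of noise, the CNR lands near 2, which is the regime where low-contrast detectability studies become sensitive to the reconstruction algorithm.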
Sethurajan, Athinthra Krishnaswamy; Krachkovskiy, Sergey A; Halalay, Ion C; Goward, Gillian R; Protas, Bartosz
2015-09-17
We used NMR imaging (MRI) combined with data analysis based on inverse modeling of the mass transport problem to determine ionic diffusion coefficients and transference numbers in electrolyte solutions of interest for Li-ion batteries. Sensitivity analyses have shown that accurate estimates of these parameters (as a function of concentration) are critical to the reliability of the predictions provided by models of porous electrodes. The inverse modeling (IM) solution was generated with an extension of the Planck-Nernst model for the transport of ionic species in electrolyte solutions. Concentration-dependent diffusion coefficients and transference numbers were derived using concentration profiles obtained from in situ (19)F MRI measurements. Material properties were reconstructed under minimal assumptions using methods of variational optimization to minimize the least-squares deviation between experimental and simulated concentration values with uncertainty of the reconstructions quantified using a Monte Carlo analysis. The diffusion coefficients obtained by pulsed field gradient NMR (PFG-NMR) fall within the 95% confidence bounds for the diffusion coefficient values obtained by the MRI+IM method. The MRI+IM method also yields the concentration dependence of the Li(+) transference number in agreement with trends obtained by electrochemical methods for similar systems and with predictions of theoretical models for concentrated electrolyte solutions, in marked contrast to the salt concentration dependence of transport numbers determined from PFG-NMR data. PMID:26247105
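The MRI+IM idea, matching simulated concentration profiles to measured ones, can be sketched with a 1D finite-difference diffusion model and a grid search over a constant diffusion coefficient. The paper instead optimizes concentration-dependent coefficients variationally; the boundary values, grids, and noise level here are assumptions for illustration only.

```python
import numpy as np

def simulate_profile(d, n=50, steps=400, dt=1e-4):
    # Explicit 1D diffusion solve with fixed-concentration boundaries,
    # mimicking polarization building up across an electrolyte gap.
    c = np.full(n, 1.0)
    c[0], c[-1] = 2.0, 0.0            # assumed boundary concentrations
    dx = 1.0 / (n - 1)
    for _ in range(steps):
        c[1:-1] += d * dt / dx ** 2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    return c

def fit_diffusivity(measured, candidates):
    # Pick the diffusivity whose simulated profile best matches the
    # measured one in the least-squares sense.
    errs = [np.sum((simulate_profile(d) - measured) ** 2) for d in candidates]
    return candidates[int(np.argmin(errs))]

d_true = 0.8                          # dimensionless "ground truth"
rng = np.random.default_rng(0)
measured = simulate_profile(d_true) + rng.normal(0.0, 0.005, 50)
d_hat = fit_diffusivity(measured, np.linspace(0.1, 2.0, 39))
```

The grid search stands in for the gradient-based variational optimization in the paper; the forward model and least-squares mismatch play the same roles in both.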
Human tooth and root canal morphology reconstruction using magnetic resonance imaging
DRĂGAN, OANA CARMEN; FĂRCĂŞANU, ALEXANDRU ŞTEFAN; CÂMPIAN, RADU SEPTIMIU; TURCU, ROMULUS VALERIU FLAVIU
2016-01-01
Background and aims Visualization of the internal and external root canal morphology is very important for a successful endodontic treatment; however, it seems to be difficult considering the small size of the tooth and the complexity of the root canal system. Film-based or digital conventional radiographic techniques as well as cone beam computed tomography provide limited information on the dental pulp anatomy or have harmful effects. A new non-invasive diagnosis tool is magnetic resonance imaging, due to its ability of imaging both hard and soft tissues. The aim of this study was to demonstrate magnetic resonance imaging to be a useful tool for imaging the anatomic conditions of the external and internal root canal morphology for endodontic purposes. Methods The endodontic system of one freshly extracted wisdom tooth, chosen for its well-known anatomical variations, was mechanically shaped using a hybrid technique. After its preparation, the tooth was immersed into a recipient with saline solution and magnetic resonance imaged immediately. A Bruker Biospec magnetic resonance imaging scanner operated at 7.04 Tesla and based on Avance III radio frequency technology was used. InVesalius software was employed for the 3D reconstruction of the tooth scanned volume. Results The current ex-vivo experiment shows the accurate 3D volume rendered reconstruction of the internal and external morphology of a human extracted and endodontically treated tooth using a dataset of images acquired by magnetic resonance imaging. The external lingual and vestibular views of the tooth as well as the occlusal view of the pulp chamber, the access cavity, the distal canal opening on the pulp chamber floor, the coronal third of the root canals, the degree of root separation and the apical fusion of the two mesial roots, details of the apical region, root canal curvatures, furcal region and interradicular root grooves could be clearly delineated. Conclusions Magnetic resonance imaging offers 3
Duval, Joseph S.
1985-01-01
Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.
4D reconstruction of the past: the image retrieval and 3D model construction pipeline
NASA Astrophysics Data System (ADS)
Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro
2014-08-01
One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels: a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.
Incomplete-data CT image reconstructions in industrial applications
NASA Astrophysics Data System (ADS)
Tam, K. C.; Eberhard, J. W.; Mitchell, K. W.
1990-06-01
In industrial X-ray computerized tomography (CT), the objects to be inspected are usually highly attenuating to X-rays, and their shape may not permit complete scanning at all view angles; incomplete-data imaging situations usually result. Image reconstruction from incomplete data can be achieved through an iterative transform algorithm, which uses a priori information about the object to compensate for the missing data. The results of validating the iterative transform algorithm on experimental data from a cross section of a high-pressure turbine blade made of Ni-based superalloy are reported. From the data set, two kinds of incomplete-data situations are simulated: incomplete projection and limited-angle scanning. The results indicate that substantial improvements, both visually and in wall-thickness measurements, were obtained in all cases through the use of the iterative transform algorithm.
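An iterative transform algorithm of the Gerchberg–Papoulis type alternates between restoring the measured frequency-domain data and imposing a priori object constraints. The toy below fills a missing angular wedge for a disc phantom with known support and non-negativity; it is a simplified analogue of the limited-angle compensation described above, not the authors' implementation.

```python
import numpy as np

def iterative_transform(measured_f, known, support, iters=200):
    # Alternate: (1) restore measured Fourier samples, (2) impose the
    # a priori support, (3) impose non-negativity. All three are convex
    # projections, so the iterate cannot drift away from the true object.
    img = np.zeros(support.shape)
    for _ in range(iters):
        f = np.fft.fft2(img)
        f[known] = measured_f[known]        # consistency with measured data
        img = np.fft.ifft2(f).real
        img *= support                      # a priori support constraint
        img = np.maximum(img, 0.0)          # a priori non-negativity
    return img

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
obj = (x ** 2 + y ** 2 < 10 ** 2).astype(float)          # disc phantom
support = (x ** 2 + y ** 2 < 14 ** 2).astype(float)      # known support
ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
ang = np.abs(np.arctan2(ky, kx))
known = (ang > 0.3) & (ang < np.pi - 0.3)                # missing angular wedge
known[0, 0] = True                                       # DC (total mass) measured
full_f = np.fft.fft2(obj)
recon = iterative_transform(full_f, known, support)
zero_fill = np.fft.ifft2(np.where(known, full_f, 0)).real
err = np.linalg.norm(recon - obj) / np.linalg.norm(obj)
base = np.linalg.norm(zero_fill - obj) / np.linalg.norm(obj)
```

The iterated result should beat plain zero-filling of the missing wedge, which is exactly the gain the abstract reports from exploiting a priori information.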
Reconstruction of mechanically recorded sound by image processing
Fadeyev, Vitaliy; Haber, Carl
2003-03-26
Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with no or minimal contact, by measuring the groove shape using precision metrology methods and digital image processing. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Various aspects of this approach are discussed. A feasibility test is reported which used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record. Comparisons are presented with stylus playback of the record and with a digitally re-mastered version of the original magnetic recording. A more extensive implementation of this approach, with dedicated hardware and software, is considered.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Bayesian Super-Resolved Surface Reconstruction From Multiple Images
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Cheesman, P.; Maluf, D. A.; Morris, R. D.; Swanson, Keith (Technical Monitor)
1999-01-01
Bayesian inference has been used successfully for many problems where the aim is to infer the parameters of a model of interest. In this paper we formulate the three dimensional reconstruction problem as the problem of inferring the parameters of a surface model from image data, and show how Bayesian methods can be used to estimate the parameters of this model given the image data. Thus we recover the three dimensional description of the scene. This approach also gives great flexibility. We can specify the geometrical properties of the model to suit our purpose, and can also use different models for how the surface reflects the light incident upon it. In common with other Bayesian inference problems, the estimation methodology requires that we can simulate the data that would have been recorded for any values of the model parameters. In this application this means that if we have image data we must be able to render the surface model. However it also means that we can infer the parameters of a model whose resolution can be chosen irrespective of the resolution of the images, and may be super-resolved. We present results of the inference of surface models from simulated aerial photographs for the case of super-resolution, where many surface elements project into a single pixel in the low-resolution images.
A High Precision Terahertz Wave Image Reconstruction Algorithm
Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang
2016-01-01
With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, combining features of both classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269
High-quality image reconstruction method for ptychography with partially coherent illumination
NASA Astrophysics Data System (ADS)
Yu, Wei; Wang, Shouyu; Veetil, Suhas; Gao, Shumei; Liu, Cheng; Zhu, Jianqiang
2016-06-01
The influence of partial coherence on image reconstruction in ptychography is analyzed, and a simple method is proposed to reconstruct a clear image of a weakly scattering object under partially coherent illumination. It is demonstrated numerically and experimentally that by illuminating a weakly scattering object with a divergent radiation beam, and performing the reconstruction only from the bright-field diffraction data, the mathematical ambiguity and the corresponding reconstruction errors related to partial coherence can be remarkably suppressed; clear reconstructed images can thus be generated even under severely incoherent illumination.
NASA Astrophysics Data System (ADS)
Liu, Baodong; Wang, Ge; Ritman, Erik L.; Cao, Guohua; Lu, Jianping; Zhou, Otto; Zeng, Li; Yu, Hengyong
2011-10-01
A multisource x-ray interior imaging system with limited-angle scanning is investigated to study the possibility of building an ultrafast micro-CT for dynamic small animal imaging, and two methods are employed to perform interior reconstruction from a limited number of projections collected by the multisource interior x-ray system. The first is total variation minimization with the steepest descent search (TVM-SD) and the second is total difference minimization with soft-threshold filtering (TDM-STF). Comprehensive numerical simulations and animal studies are performed to validate the associated reconstruction methods and demonstrate the feasibility and application of the proposed system configuration. The image reconstruction results show that both reconstruction methods can significantly improve the image quality and that TDM-STF is slightly superior to TVM-SD. Finally, quantitative image analysis shows that it is possible to make an ultrafast micro-CT using a multisource interior x-ray system scheme combined with state-of-the-art interior tomography.
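Total variation minimization with steepest descent (TVM-SD) can be sketched in 1D as gradient descent on a least-squares data term plus a smoothed TV penalty. The measurement matrix, smoothing parameter, and test signal below are illustrative assumptions; the paper applies the idea to interior CT projections, not generic random measurements.

```python
import numpy as np

def tv_grad(x, eps=1e-2):
    # Gradient of the smoothed total variation sum_i sqrt(dx_i^2 + eps^2).
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps ** 2)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

def tvm_sd(a, b, lam=0.05, step=0.02, iters=5000):
    # Steepest descent on 0.5*||A x - b||^2 + lam * TV(x).
    x = np.zeros(a.shape[1])
    for _ in range(iters):
        x -= step * (a.T @ (a @ x - b) + lam * tv_grad(x))
    return x

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.5], 20)        # piecewise-constant signal
a = rng.normal(size=(30, 60)) / np.sqrt(30)   # underdetermined measurements
x_hat = tvm_sd(a, a @ truth)
err = np.linalg.norm(x_hat - truth) / np.linalg.norm(truth)
```

The TV penalty favors piecewise-constant solutions, so the reconstruction recovers far more than the measurement count alone would suggest.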
3D Reconstruction of virtual colon structures from colonoscopy images.
Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C
2014-01-01
This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230
A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction
Konkle, Justin J.; Goodwill, Patrick W.; Hensley, Daniel W.; Orendorff, Ryan D.; Lustig, Michael; Conolly, Steven M.
2015-01-01
Magnetic Particle Imaging (MPI) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative MPI across rat-sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the MPI signal is lost during direct-feedthrough signal filtering, MPI reconstruction algorithms must recover this zero-frequency value. Prior x-space MPI recovery techniques were limited to 1D approaches which could introduce artifacts when reconstructing a 3D image. In this paper, we formulate x-space reconstruction as a 3D convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications. PMID:26495839
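Recovering the filtered-out zero-frequency value with smoothness and non-negativity priors can be illustrated in 1D with projected gradient descent. This is a sketch of the idea only, not the authors' 3D solver; the signal, weights, and iteration counts are assumptions.

```python
import numpy as np

def xspace_dc_recovery(y, lam=0.01, step=0.5, iters=2000):
    # Projected gradient on 0.5*||(x - mean(x)) - y||^2 + 0.5*lam*||D x||^2
    # subject to x >= 0. The data term sees only the mean-removed signal
    # (the DC value is lost to filtering); non-negativity pins the offset.
    n = y.size
    d = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # finite differences
    x = np.zeros(n)
    for _ in range(iters):
        g = (x - x.mean() - y) + lam * (d.T @ (d @ x))
        x = np.maximum(x - step * g, 0.0)             # non-negativity projection
    return x

t = np.linspace(0.0, 1.0, 64)
truth = np.clip(np.sin(2.0 * np.pi * t), 0.0, None)   # non-negative, touches zero
y = truth - truth.mean()                              # DC lost to filtering
x_hat = xspace_dc_recovery(y)
err = np.linalg.norm(x_hat - truth) / np.linalg.norm(truth)
```

Because the true image touches zero somewhere, the non-negativity constraint fixes the otherwise unobservable DC offset, which is the essence of the convex formulation described above.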
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
NASA Astrophysics Data System (ADS)
Juste, B.; Miró, R.; Verdú, G.; Santos, A.
2014-06-01
This work presents a methodology to reconstruct a Linac high energy photon spectrum beam. The method is based on EPID scatter images generated when the incident photon beam impinges onto a plastic block. The distribution of scatter radiation produced by this scattering object placed on the external EPID surface and centered at the beam field size was measured. The scatter distribution was also simulated for a series of monoenergetic identical geometry photon beams. Monte Carlo simulations were used to predict the scattered photons for monoenergetic photon beams at 92 different locations, with 0.5 cm increments and at 8.5 cm from the centre of the scattering material. Measurements were performed with the same geometry using a 6 MeV photon beam produced by the linear accelerator. A system of linear equations was generated to combine the polyenergetic EPID measurements with the monoenergetic simulation results. Regularization techniques were applied to solve the system for the incident photon spectrum. A linear matrix system, A×S=E, was developed to describe the scattering interactions and their relationship to the primary spectrum (S). A is the monoenergetic scatter matrix determined from the Monte Carlo simulations, S is the incident photon spectrum, and E represents the scatter distribution characterized by EPID measurement. Direct matrix inversion methods produce results that are not physically consistent due to errors inherent in the system, therefore Tikhonov regularization methods were applied to address the effects of these errors and to solve the system for obtaining a consistent bremsstrahlung spectrum.
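Tikhonov regularization of the linear system A×S=E replaces the unstable direct inversion with the ridge solution S = (AᵀA + α²I)⁻¹AᵀE. The sketch below uses a synthetic smooth kernel in place of the Monte Carlo scatter matrix; the energy grid, kernel shape, regularization weight, and noise level are assumptions for illustration.

```python
import numpy as np

def tikhonov_solve(a, e, alpha):
    # Ridge solution: minimizes ||A s - e||^2 + alpha^2 ||s||^2,
    # i.e. s = (A^T A + alpha^2 I)^{-1} A^T e.
    n = a.shape[1]
    return np.linalg.solve(a.T @ a + alpha ** 2 * np.eye(n), a.T @ e)

rng = np.random.default_rng(0)
energies = np.linspace(0.25, 6.0, 24)             # MeV bins (assumed)
detectors = np.linspace(0.0, 1.0, 92)             # 92 scatter locations, as above
a = np.exp(-np.outer(detectors, 1.0 / energies))  # synthetic smooth kernel
s_true = np.exp(-((energies - 2.0) ** 2))         # bremsstrahlung-like bump
e = a @ s_true + rng.normal(0.0, 1e-3, detectors.size)
s_naive = np.linalg.lstsq(a, e, rcond=None)[0]    # direct inversion: unstable
s_reg = tikhonov_solve(a, e, alpha=0.05)
```

The kernel's columns are nearly collinear, so even 0.1% measurement noise makes the unregularized solution physically inconsistent, which is exactly why the abstract resorts to Tikhonov regularization.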
Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang
2015-01-01
Iterative reconstruction algorithms for computed tomography (CT) through total variation regularization, based on a piecewise-constant assumption, can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data for 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems through TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into computed tomography and split into three independent optimization variables by introducing auxiliary variables. This new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and effective for CBCT imaging. Theoretical and technical optimization should be investigated carefully in terms of both computational efficiency and the high resolution of this algorithm in application-oriented research. PMID:26756406
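The FFT-based closed-form subproblem solutions such alternating-direction algorithms rely on are possible because, with periodic boundaries, operators like I + μDᵀD are circulant and therefore diagonalized by the DFT. A 1D sketch follows (μ, the signal, and the noise level are illustrative; the paper works with 3D cone-beam data).

```python
import numpy as np

def solve_smoothing_fft(b, mu):
    # Solve (I + mu * D^T D) x = b in the Fourier domain, where D is the
    # periodic finite-difference operator. The circulant system has
    # eigenvalues 1 + mu * (2 - 2 cos(omega)) on the DFT basis.
    n = b.size
    omega = 2.0 * np.pi * np.arange(n) / n
    eig = 1.0 + mu * (2.0 - 2.0 * np.cos(omega))
    return np.real(np.fft.ifft(np.fft.fft(b) / eig))

rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False))
b = x_true + rng.normal(0.0, 0.2, 128)
x_smooth = solve_smoothing_fft(b, mu=5.0)
# Cross-check against the direct dense solve of the same normal equations.
d = np.roll(np.eye(128), -1, axis=0) - np.eye(128)   # periodic difference matrix
x_direct = np.linalg.solve(np.eye(128) + 5.0 * d.T @ d, b)
```

The FFT route costs O(n log n) per subproblem instead of a dense solve, which is the complexity reduction the abstract attributes to its frequency-domain analytical solutions.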
A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image.
Guo, Chengyu; Ruan, Songsong; Liang, Xiaohui; Zhao, Qinping
2016-01-01
Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Using an unrestricted capture scheme, which produces occlusions or breezing, the information describing each part of a human body and the relationship between each part or even different pedestrians must be present in a still image. Using this framework, a multi-layered, spatial, virtual, human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study are used to describe the effectiveness and usability of the proposed approach. PMID:26907289
Pesavento, J B; Morgan, D; Bermingham, R; Zamora, D; Chromy, B; Segelke, B; Coleman, M; Xing, L; Cheng, H; Bench, G; Hoeprich, P
2007-06-07
Nanolipoprotein particles (NLPs) are small, 10-20 nm diameter assemblies of apolipoproteins and lipids. At Lawrence Livermore National Laboratory (LLNL), multiple variants of these assemblies have been constructed. NLPs have been generated from a variety of lipoproteins, including apolipoprotein A1, apolipophorin III, apolipoprotein E4 22K, and MSP1T2 (Nanodisc, Inc.). Lipids used included DMPC (the bulk of the bilayer material), DMPE (in various amounts), and DPPC. NLPs were made in either the absence or presence of the detergent cholate. Electron microscopy data were collected as part of the characterization component of this research. Although purified by size exclusion chromatography (SEC), the samples are somewhat heterogeneous when analyzed at the nanoscale by negative-stain cryo-EM. Images reveal a broad range of shape heterogeneity, suggesting variability in conformational flexibility; in fact, modeling studies point to the dynamics of inter-helical loop regions within apolipoproteins as a possible source of the observed variation in NLP size. Initial attempts at three-dimensional reconstruction have proven challenging due to this size and shape disparity. A strategy of computational size exclusion is being pursued to group particles into subpopulations based on average particle diameter. Results are shown here from ongoing efforts to statistically and computationally subdivide NLP populations, realize greater homogeneity, and then generate 3D reconstructions.
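The computational size-exclusion step described above — grouping particles into subpopulations by average diameter before attempting separate 3D reconstructions — amounts to simple binning. A minimal sketch; the bin width and diameters below are illustrative, not values from the study:

```python
import numpy as np

def group_by_diameter(diams, bin_width):
    """Computational size-exclusion sketch: bin particle diameters (nm)
    into subpopulations so each can be reconstructed separately.
    The bin width is a free choice here, not a value from the study."""
    diams = np.asarray(diams)
    edges = np.arange(diams.min(), diams.max() + bin_width, bin_width)
    labels = np.digitize(diams, edges) - 1   # bin index per particle
    return {b: diams[labels == b] for b in np.unique(labels)}

# Hypothetical measured diameters, 2.5 nm bins
groups = group_by_diameter([10.2, 10.8, 14.9, 15.3, 19.7], 2.5)
```

Each subpopulation can then be passed to a separate reconstruction run.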
Redler, Gage; Qiao, Zhiwei; Epel, Boris; Halpern, Howard J.
2015-01-01
The importance of tissue oxygenation has led to great interest in methods for imaging pO2 in vivo. Electron paramagnetic resonance imaging (EPRI) provides noninvasive, near-absolute, 1-mm-resolved 3D images of pO2 in the tissues and tumors of living animals. Current EPRI image reconstruction methods tend to be time-consuming and preclude real-time visualization of the information. Methods are presented to significantly accelerate the reconstruction process and enable real-time reconstruction of EPRI pO2 images: image reconstruction using graphics processing unit (GPU)-based 3D filtered back-projection, and lookup-table parameter fitting. The combination of these methods yields acceleration factors of over 650 compared to current methods and allows real-time reconstruction of EPRI images of pO2 in vivo. PMID:26167137
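The lookup-table parameter fitting mentioned above replaces per-voxel iterative curve fitting with a search over precomputed model curves. A minimal sketch assuming a generic exponential signal model (the actual EPRI pO2 model is not specified here):

```python
import numpy as np

def build_lookup(param_grid, model, x):
    # Precompute one model curve per candidate parameter value.
    return np.stack([model(x, p) for p in param_grid])   # shape (P, N)

def lut_fit(data, table, param_grid):
    """data: (V, N) measured curves per voxel. For each voxel, pick the
    table row with the smallest sum-of-squares residual."""
    resid = ((data[:, None, :] - table[None, :, :]) ** 2).sum(axis=2)  # (V, P)
    return param_grid[np.argmin(resid, axis=1)]

# Toy example: fit an exponential decay rate per "voxel".
x = np.linspace(0.0, 1.0, 50)
model = lambda x, k: np.exp(-k * x)
grid = np.linspace(0.1, 5.0, 200)
table = build_lookup(grid, model, x)

true_k = np.array([0.5, 2.0, 4.0])
data = np.stack([model(x, k) for k in true_k])
est = lut_fit(data, table, grid)
```

The fit accuracy is limited only by the grid spacing, and the per-voxel cost is a single vectorized residual computation rather than an iterative solve.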
Electron Trajectory Reconstruction for Advanced Compton Imaging of Gamma Rays
NASA Astrophysics Data System (ADS)
Plimley, Brian Christopher
Gamma-ray imaging is useful for detecting, characterizing, and localizing sources in a variety of fields, including nuclear physics, security, nuclear accident response, nuclear medicine, and astronomy. Compton imaging in particular provides sensitivity to weak sources and good angular resolution in a large field of view. However, the photon origin in a single event sequence is normally only limited to the surface of a cone. If the initial direction of the Compton-scattered electron can be measured, the cone can be reduced to a cone segment whose width depends on the uncertainty of the direction measurement, providing a corresponding increase in imaging sensitivity. Measurement of the electron's initial direction in an efficient detection material requires very fine position resolution due to the electron's short range and tortuous path. A thick (650 μm), fully depleted charge-coupled device (CCD) developed for infrared astronomy has 10.5 μm position resolution in two dimensions, enabling the initial trajectory measurement of electrons with energies as low as 100 keV. This is the first time the initial trajectories of electrons of such low energies have been measured in a solid material. In this work, the CCD's efficacy as a gamma-ray detector is demonstrated experimentally, using a reconstruction algorithm to measure the initial electron direction from the CCD track image. In addition, models of fast-electron interaction physics, charge transport and readout were used to generate modeled tracks with known initial direction. These modeled tracks allowed the development and refinement of the reconstruction algorithm. The angular sensitivity of the reconstruction algorithm is evaluated extensively with models for tracks below 480 keV, showing a FWHM as low as 20° in the pixel plane and 30° RMS sensitivity to the magnitude of the out-of-plane angle. The measurement of the trajectories of electrons with energies as low as 100 keV has the potential to make electron
Accurate calibration of a stereo-vision system in image-guided radiotherapy.
Liu, Dezhi; Li, Shidong
2006-11-01
Image-guided radiotherapy using a three-dimensional (3D) camera as the on-board surface imaging system requires precise and accurate registration of the 3D surface images in the treatment machine coordinate system. Two simple calibration methods, an analytical solution via three-point matching and a least-squares estimation method via multipoint registration, were introduced to correlate the stereo-vision surface imaging frame with the machine coordinate system. Both types of calibration utilized 3D surface images of a calibration template placed on top of the treatment couch. Image transformation parameters were derived from corresponding 3D marked points on the surface images and their given coordinates in the treatment room coordinate system. Our experimental results demonstrated that both methods provided the desired calibration accuracy of 0.5 mm. The multipoint registration method is more robust, particularly for noisy 3D surface images. Both calibration methods have been used as our weekly QA tools for a 3D image-guided radiotherapy system. PMID:17153416
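A least-squares multipoint registration of corresponding 3D point sets can be realized with the standard SVD (Kabsch) solution for the rigid transform. This is a generic sketch of that technique, not the authors' implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) corresponding 3D points. Standard SVD (Kabsch)
    solution; a sketch of multipoint registration, not the paper's code."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation about z plus a translation from 10 points.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
pts = np.random.default_rng(0).normal(size=(10, 3))
R, t = rigid_register(pts, pts @ R_true.T + t_true)
```

With noisy marked points, the same closed form returns the least-squares optimum, which is why multipoint registration is more robust than a three-point analytical solution.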
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite's sensor and the scene is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so accurately identifying the motion blur direction and length is crucial for building the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the severe noise present in actual remote sensing images often makes the stripes indistinct; the parameters then become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, GrabCut, is adopted to effectively extract the edge of the bright center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after the blur parameters are estimated. The experimental results verify the effectiveness and robustness of the algorithm.
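The two parameters being estimated (blur length and direction) fully determine a linear motion-blur PSF, which can be sketched as a rasterized, unit-sum line segment. This kernel generator is illustrative, not the paper's code:

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Rasterize a linear motion-blur PSF of given length (pixels) and
    direction (degrees) into a size-by-size kernel, normalized to unit
    sum. A sketch of the PSF parameterization the abstract refers to."""
    psf = np.zeros((size, size))
    c = size // 2
    th = np.deg2rad(angle_deg)
    # Sample points densely along the blur path through the kernel center.
    for s in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        x = int(round(c + s * np.cos(th)))
        y = int(round(c + s * np.sin(th)))
        psf[y, x] = 1.0
    return psf / psf.sum()

# Horizontal 9-pixel blur kernel in a 21x21 support.
psf = motion_psf(9, 0.0, 21)
```

Convolving a sharp image with this kernel produces the regular dark stripes in the spectrum whose orientation and spacing the Radon-transform step recovers.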
Three-dimensional image reconstruction for PET by multi-slice rebinning and axial image filtering.
Lewitt, R M; Muehllehner, G; Karp, J S
1994-03-01
A fast method is described for reconstructing volume images from three-dimensional (3D) coincidence data in positron emission tomography (PET). The reconstruction method makes use of all coincidence data acquired by high-sensitivity PET systems that do not have inter-slice absorbers (septa) to restrict the axial acceptance angle. The reconstruction method requires only a small amount of storage and computation, making it well suited for dynamic and whole-body studies. The method consists of three steps: (i) rebinning of coincidence data into a stack of 2D sinograms; (ii) slice-by-slice reconstruction of the sinogram associated with each slice to produce a preliminary 3D image having strong blurring in the axial (z) direction, but with different blurring at different z positions; and (iii) spatially variant filtering of the 3D image in the axial direction (i.e. 1D filtering in z for each x-y column) to produce the final image. The first step involves a new form of the rebinning operation in which multiple sinograms are incremented for each oblique coincidence line (multi-slice rebinning). The axial filtering step is formulated and implemented using the singular value decomposition (SVD). The method has been applied successfully to simulated data and to measured data for different kinds of phantom (multiple point sources, multiple discs, a cylinder with cold spheres, and a 3D brain phantom). PMID:15551583
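Step (i), multi-slice rebinning, can be illustrated with a toy in which each oblique coincidence line increments every axial slice it spans, rather than only the midpoint slice; radial and angular sinogram binning is collapsed to a single counter per slice for brevity:

```python
import numpy as np

def multislice_rebin(events, n_slices):
    """Toy multi-slice rebinning: each oblique coincidence line recorded
    at axial detector slices (z1, z2) increments every slice it crosses,
    with weight 1/(number of slices spanned). A full implementation would
    also bin each increment radially and angularly into a 2D sinogram."""
    counts = np.zeros(n_slices)
    for z1, z2 in events:
        lo, hi = min(z1, z2), max(z1, z2)
        span = np.arange(lo, hi + 1)
        counts[span] += 1.0 / len(span)   # spread the event over its slices
    return counts

# One oblique line spanning slices 0-4 and one direct line in slice 2.
counts = multislice_rebin([(0, 4), (2, 2)], 5)
```

The axial blurring this spreading introduces is exactly what steps (ii) and (iii) remove, via slice-by-slice reconstruction followed by spatially variant axial filtering.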
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
Image improvement and three-dimensional reconstruction using holographic image processing
NASA Technical Reports Server (NTRS)
Stroke, G. W.; Halioua, M.; Thon, F.; Willasch, D. H.
1977-01-01
Holographic computing principles make possible image improvement and synthesis in many cases of current scientific and engineering interest. Examples are given for the improvement of resolution in electron microscopy and 3-D reconstruction in electron microscopy and X-ray crystallography, following an analysis of optical versus digital computing in such applications.
Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images
NASA Astrophysics Data System (ADS)
Deng, Yuanbo; Chu, Daping
2016-03-01
The present study evaluates how the filling factor of a mask applied to a phase-only hologram affects the corresponding reconstructed image. A square aperture with varying filling factor is applied to the phase-only hologram of the target image; the average cross-sectional intensity profile of the reconstructed image is obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the imaging process. The Lena image is used as the target, and the RMSE and SSIM metrics assess the quality of the reconstructed image. The results show that the measured PSF agrees with the PSF given by the Fourier transform of the mask, and that as the filling factor of the mask decreases, the width of the PSF increases and the quality of the reconstructed image drops. These characteristics could be exploited in practical situations where the phase-only hologram is confined or needs to be sliced or tiled.
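The reported relation between filling factor and PSF width follows from the PSF being (the squared magnitude of) the Fourier transform of the aperture mask: a smaller aperture gives a wider transform. A 1D numpy sketch, with illustrative sizes and fill factors:

```python
import numpy as np

def aperture_psf(n, fill):
    """Intensity PSF of a centered square aperture occupying `fill` of
    the hologram width (1D cut): PSF = |FT(mask)|^2, peak-normalized.
    Sketch of the relation the study measures, not its code."""
    mask = np.zeros(n)
    w = max(1, int(round(fill * n)))
    mask[(n - w) // 2:(n - w) // 2 + w] = 1.0
    psf = np.abs(np.fft.fftshift(np.fft.fft(mask))) ** 2
    return psf / psf.max()

p_full = aperture_psf(256, 1.0)   # unmasked hologram: delta-like PSF
p_half = aperture_psf(256, 0.5)   # half-width mask: broadened sinc^2 PSF
```

The unmasked case concentrates all energy in the central sample, while the masked case spreads energy into side samples, which is the measured image-quality drop.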
Event-by-event PET image reconstruction using list-mode origin ensembles algorithm
NASA Astrophysics Data System (ADS)
Andreyev, Andriy
2016-03-01
There is great demand for real-time, or event-by-event (EBE), image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used by the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work, an alternative origin ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and investigated as a possible alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along their corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, demonstrating the feasibility of the proposed approach.
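The point-cloud picture can be illustrated with a 1D toy: each event owns a point on its LOR, and quasi-random shifts biased toward better-populated regions move the cloud toward the underlying distribution, while new events can be appended at any time. The acceptance rule below is a deliberate simplification for illustration, not the published OE likelihood:

```python
import numpy as np

def oe_iterate(lors, n_bins, n_iter, seed=0):
    """Toy origin-ensembles-style update (1D): each event owns a point
    somewhere on its LOR interval [a, b]; points are given random shifts
    along the LOR and accepted preferentially when they land in denser
    bins. Simplified acceptance rule for illustration only."""
    rng = np.random.default_rng(seed)
    pts = np.array([rng.uniform(a, b) for a, b in lors])
    for _ in range(n_iter):
        k = rng.integers(len(pts))
        a, b = lors[k]
        prop = rng.uniform(a, b)                        # shift along the LOR
        hist, _ = np.histogram(pts, bins=n_bins, range=(0.0, 1.0))
        i = min(int(pts[k] * n_bins), n_bins - 1)
        j = min(int(prop * n_bins), n_bins - 1)
        # Accept moves toward denser bins, occasionally otherwise.
        if rng.random() < min(1.0, (hist[j] + 1) / (hist[i] + 1)):
            pts[k] = prop
    return pts

# Four hypothetical LOR intervals on [0, 1]; three overlap near 0.5.
lors = [(0.4, 0.6), (0.45, 0.65), (0.0, 1.0), (0.35, 0.55)]
pts = oe_iterate(lors, 20, 500)
```

Because each update touches one event and a histogram, a newly detected event is added by simply appending one more point, which is what makes the EBE conversion natural.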
Trentacosta, Natasha; Fillar, Allison Liefeld; Liefeld, Cynthia Pierce; Hossack, Michael D.; Levy, I. Martin
2014-01-01
Background: Surgical reconstruction of the anterior cruciate ligament (ACL) can be complicated by incorrect and variable tunnel placement, graft tunnel mismatch, cortical breaches, and inadequate fixation due to screw divergence. This is the first report describing the use of a C-arm with image intensifier employed for the sole purpose of eliminating those complications during transtibial ACL reconstruction. Purpose: To determine if the use of a C-arm with image intensifier during arthroscopically assisted transtibial ACL reconstruction (IIAA-TACLR) eliminated common complications associated with bone–patellar tendon–bone ACL reconstruction, including screw divergence, cortical breaches, graft-tunnel mismatch, and improper positioning of the femoral and tibial tunnels. Study Design: Case series; Level of evidence, 4. Methods: A total of 110 consecutive patients (112 reconstructed knees) underwent identical IIAA-TACLR using a bone–patellar tendon–bone autograft performed by a single surgeon. Intra- and postoperative radiographic images and operative reports were evaluated for each patient, looking for evidence of cortical breaching and screw divergence. Precision of femoral tunnel placement was evaluated using a sector map modified from Bernard et al. Graft recession distance and tibial α angles were recorded. Results: There were no femoral or tibial cortical breaches noted intraoperatively or on postoperative images. There were no instances of loss of fixation screw major thread engagement. There were no instances of graft-tunnel mismatch. The positions of the femoral tunnels were accurate and precise, falling into the desired sector of our location map (sector 1). Tibial α angles and graft recession distances varied widely. Conclusion: The use of the C-arm with image intensifier enabled accurate and precise tunnel placement and completely eliminated cortical breach, graft-tunnel mismatch, and screw divergence during IIAA-TACLR by allowing incremental
NASA Astrophysics Data System (ADS)
Liang, Zhiting; Guan, Yong; Liu, Gang; Bian, Rui; Zhang, Xiaobo; Xiong, Ying; Tian, Yangchao
2013-09-01
Nano-CT is an important technique for analyzing the internal structures of nanomaterials and biological cells. However, the maximum rotation angle of the sample stage is limited by the sample space, and the scan time required to acquire enough projections is in some cases exorbitantly long. It is therefore difficult to acquire high-quality nano-CT images from limited-angle or few-view projections using conventional Fourier reconstruction methods. In this paper, total variation (TV) iterative reconstruction is used to reconstruct numerical images and nano-CT images from limited-angle and few-view data. The results indicate that better-quality images are achieved.
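The TV penalty at the heart of such iterative reconstruction can be sketched in a few lines; the projection data-fidelity term that a full limited-angle reconstruction would alternate with is omitted here:

```python
import numpy as np

def tv_and_grad(u, eps=1e-6):
    """Smoothed isotropic total variation of image u and its gradient.
    A sketch of the TV penalty used in limited-angle/few-view iterative
    reconstruction; the projection data-fidelity term is omitted."""
    ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
    uy = np.diff(u, axis=0, append=u[-1:, :])   # zero at the far edge
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    g = np.zeros_like(u)
    g[:, 0] -= px[:, 0]                  # adjoint (negative divergence)
    g[:, 1:] += px[:, :-1] - px[:, 1:]   # of the difference operators
    g[0, :] -= py[0, :]
    g[1:, :] += py[:-1, :] - py[1:, :]
    return mag.sum(), g

rng = np.random.default_rng(1)
u = rng.normal(size=(32, 32))            # noisy test image
tv0, g = tv_and_grad(u)
tv1, _ = tv_and_grad(u - 0.01 * g)       # one descent step smooths the image
```

In a full reconstruction, such TV descent steps are interleaved with steps enforcing consistency with the measured (limited-angle or few-view) projections.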
An adaptive total variation image reconstruction method for speckles through disordered media
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei
2013-09-01
Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with image reconstruction methods. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, images restored by common reconstruction algorithms such as Tikhonov regularization have a relatively low signal-to-noise ratio (SNR) due to experimental noise and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory and statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise in the restored image and preserve more image detail, thus greatly boosting the SNR of the restored image. They also indicate that, compared with the image formed directly by a `clean' system, the reconstructed results can overcome the diffraction limit of the `clean' system, which is conducive to the observation of cells, protein molecules and other micro/nanoscale structures in biological tissues.
Restoration of singularities in reconstructed phase of crystal image in electron holography.
Li, Wei; Tanji, Takayoshi
2014-12-01
Off-axis electron holography can be used to measure the inner potential of a specimen from its reconstructed phase image and is thus a powerful technique for materials scientists. However, abrupt reversals of contrast from white to black may sometimes occur in a digitally reconstructed phase image, which yields inaccurate information. Such phase distortion is mainly due to the digital reconstruction process and weak electron wave amplitude in some areas of the specimen. Digital image processing can therefore be applied to the reconstruction and restoration of phase images. In this paper, fringe reconnection processing is applied to the phase-image restoration of a crystal structure image. Disconnected and wrongly connected interference fringes in the hologram, which directly cause 2π phase-jump imperfections, are correctly reconnected. Experimental results show that the phase distortion is significantly reduced after processing, and the quality of the reconstructed phase image is improved by the removal of imperfections in the final phase. PMID:25272997
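The 2π phase-jump artifact in question can be illustrated in one dimension: a smooth phase ramp recovered via the argument of a complex wave is wrapped into (-π, π], and unwrapping reconnects the jumps. This illustrates the artifact only, not the paper's fringe-reconnection method:

```python
import numpy as np

# A smooth phase ramp spanning more than 2π, as a crystal image might show.
true_phase = np.linspace(0.0, 12.0, 200)

# What digital reconstruction sees: the phase wrapped into (-pi, pi],
# with abrupt 2*pi jumps wherever the ramp crosses an odd multiple of pi.
wrapped = np.angle(np.exp(1j * true_phase))

# Reconnecting the jumps recovers the smooth phase (1D analogue of the
# restoration the paper performs on 2D fringe patterns).
restored = np.unwrap(wrapped)
```

In 2D, with noise and weak-amplitude regions, unwrapping paths become ambiguous, which is why the paper operates on the hologram fringes themselves.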
A nonlinear image reconstruction technique for ECT using a combined neural network approach
NASA Astrophysics Data System (ADS)
Marashdeh, Q.; Warsito, W.; Fan, L.-S.; Teixeira, F. L.
2006-08-01
A combined multilayer feed-forward neural network (MLFF-NN) and analogue Hopfield network is developed for nonlinear image reconstruction of electrical capacitance tomography (ECT). The (nonlinear) forward problem in ECT is solved using the MLFF-NN trained with a set of capacitance data from measurements based on a back-propagation training algorithm with regularization. The inverse problem is solved using an analogue Hopfield network based on a neural-network multi-criteria optimization image reconstruction technique (HN-MOIRT). The nonlinear image reconstruction based on this combined MLFF-NN + HN-MOIRT approach is tested on measured capacitance data not used in training to reconstruct the permittivity distribution. The performance of the technique is compared against commonly used linear Landweber and semi-linear image reconstruction techniques, showing superiority in terms of both stability and quality of reconstructed images.
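The linear Landweber iteration used above as a comparison baseline is simple to state: repeatedly back-project the capacitance residual and clip to the physical range. A toy sketch with a random, hypothetical sensitivity matrix (not real ECT data):

```python
import numpy as np

def landweber(S, c, n_iter=20000, alpha=None):
    """Linear Landweber iteration, the ECT baseline mentioned above:
    x <- clip(x + alpha * S^T (c - S x), 0, 1), where S is a (toy)
    sensitivity matrix and c the capacitance measurement vector."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2  # step inside convergence bound
    x = np.zeros(S.shape[1])
    for _ in range(n_iter):
        x = np.clip(x + alpha * S.T @ (c - S @ x), 0.0, 1.0)
    return x

# Toy check: recover a binary permittivity pattern from noise-free data.
rng = np.random.default_rng(2)
S = rng.uniform(size=(12, 6))            # hypothetical sensitivity matrix
x_true = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
x_rec = landweber(S, S @ x_true)
```

Its purely linear update is what limits it on the nonlinear ECT forward problem, motivating the MLFF-NN forward solver and Hopfield-network inversion the abstract proposes.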
Development of anatomically and dielectrically accurate breast phantoms for microwave imaging appl