Science.gov

Sample records for accurately reconstruct images

  1. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research, and advanced applications, and one main effort toward this goal is the use of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as loss of fine structure or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. By integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method that restores meaningful structures and details which would otherwise be discarded, without introducing much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets show that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
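
    The alternating structure of such compressed-sensing MRI schemes (a sparsity-promoting denoising step followed by a data-consistency step against the measured k-space samples) can be illustrated with a generic sketch. The Python/NumPy snippet below is a minimal, hypothetical illustration on a retrospectively undersampled phantom; it is not the authors' IFR-CS implementation (it omits the feature-refinement and Tikhonov steps), and the sampling mask, threshold and iteration count are arbitrary assumptions.

      import numpy as np

      def soft_threshold(x, t):
          # Sparsity-promoting shrinkage (stand-in for the denoising step).
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def cs_mri_sketch(image, mask, n_iter=50, lam=0.02):
          # Toy CS-MRI loop: image-domain soft-thresholding + k-space data consistency.
          y = np.fft.fft2(image) * mask          # undersampled k-space measurements
          x = np.abs(np.fft.ifft2(y))            # zero-filled initial estimate
          for _ in range(n_iter):
              x = soft_threshold(x, lam)         # crude image-domain sparsity prior
              k = np.fft.fft2(x)
              k[mask] = y[mask]                  # keep the measured samples untouched
              x = np.abs(np.fft.ifft2(k))
          return x

      rng = np.random.default_rng(0)
      phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
      mask = rng.random((64, 64)) < 0.35         # assumed random 35% k-space sampling
      recon = cs_mri_sketch(phantom, mask)
      print("relative error:", np.linalg.norm(recon - phantom) / np.linalg.norm(phantom))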

  2. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
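
    The search over candidate effective permittivities can be sketched as fitting a cheap surrogate to an image-quality fitness function. The Python/NumPy snippet below is a hedged illustration only: it samples a hypothetical fitness function at a few permittivity values, fits a low-order polynomial surrogate, and reports the permittivity that maximizes it. The fitness function, sample points and polynomial degree are invented for illustration and are not the authors' adaptive stochastic-collocation scheme.

      import numpy as np

      def image_fitness(eps_r):
          # Hypothetical stand-in: quality of an image reconstructed with relative
          # permittivity eps_r (a synthetic bump peaking near eps_r = 9).
          return np.exp(-0.5 * ((eps_r - 9.0) / 1.5) ** 2)

      eps_samples = np.linspace(4.0, 16.0, 7)           # broad initial permittivity range
      fitness = image_fitness(eps_samples)

      coeffs = np.polyfit(eps_samples, fitness, deg=4)  # polynomial surrogate of the fitness
      eps_fine = np.linspace(eps_samples[0], eps_samples[-1], 1001)
      best_eps = eps_fine[np.argmax(np.polyval(coeffs, eps_fine))]
      print(f"estimated effective permittivity: {best_eps:.2f}")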

  3. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images

    PubMed Central

    Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  4. Accurate 3D reconstruction of complex blood vessel geometries from intravascular ultrasound images: in vitro study.

    PubMed

    Subramanian, K R; Thubrikar, M J; Fowler, B; Mostafavi, M T; Funk, M W

    2000-01-01

    We present a technique that accurately reconstructs complex three-dimensional blood vessel geometry from 2D intravascular ultrasound (IVUS) images. Biplane x-ray fluoroscopy is used to image the ultrasound catheter tip at a few key points along its path as the catheter is pulled through the blood vessel. An interpolating spline describes the continuous catheter path. The IVUS images are located orthogonal to the path, resulting in a non-uniform structured scalar volume of echo densities. Isocontour surfaces are used to view the vessel geometry, while transparency and clipping enable interactive exploration of interior structures. The two geometries studied are a bovine artery vascular graft having a U-shape and a constriction, and a canine carotid artery having multiple branches and a constriction. Accuracy of the reconstructions is established by comparing the reconstructions to (1) silicone moulds of the vessel interior, (2) biplane x-ray images, and (3) the original echo images. Excellent shape and geometry correspondence was observed in both geometries. Quantitative measurements made at key locations of the 3D reconstructions were also in good agreement with those made in silicone moulds. The proposed technique is easily adoptable in clinical practice, since it uses x-rays with minimal exposure and existing IVUS technology. PMID:11105284
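
    The interpolating-spline step described above can be sketched briefly: given a few 3D catheter-tip positions located from the biplane x-ray views, a cubic spline yields a continuous path and its tangent directions, along which the IVUS cross-sections could be placed. The snippet below is a hypothetical illustration using SciPy's CubicSpline; the sample points are fabricated.

      import numpy as np
      from scipy.interpolate import CubicSpline

      # A few catheter-tip positions (x, y, z) located from biplane fluoroscopy (fabricated values).
      key_points = np.array([
          [0.0, 0.0, 0.0],
          [5.0, 2.0, 8.0],
          [9.0, 7.0, 15.0],
          [11.0, 14.0, 22.0],
      ])

      # Parameterize by cumulative chord length and fit one spline per coordinate.
      seg = np.linalg.norm(np.diff(key_points, axis=0), axis=1)
      s = np.concatenate(([0.0], np.cumsum(seg)))
      spline = CubicSpline(s, key_points, axis=0)

      # Sample the continuous path and its unit tangents (image planes lie orthogonal to these).
      s_fine = np.linspace(0.0, s[-1], 50)
      path = spline(s_fine)
      tangents = spline(s_fine, 1)
      tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
      print(path.shape, tangents.shape)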

  5. Reconstruction of applicator positions from multiple-view images for accurate superficial hyperthermia treatment planning

    NASA Astrophysics Data System (ADS)

    Drizdal, T.; Paulides, M. M.; Linthorst, M.; van Rhoon, G. C.

    2012-05-01

    In the current clinical practice, prior to superficial hyperthermia treatments (HT), temperature probes are placed in tissue to document a thermal dose. To investigate whether the painful procedure of catheter placement can be replaced by superficial HT planning, we study if the specific absorption rate (SAR) coverage is predictive for treatment outcome. An absolute requirement for such a study is the accurate reconstruction of the applicator setup. The purpose of this study was to investigate the feasibility of the applicator setup reconstruction from multiple-view images. The accuracy of the multiple-view reconstruction method has been assessed for two experimental setups using six lucite cone applicators (LCAs), representing the largest array applied at our clinic and also the most difficult scenario for the reconstruction. For the two experimental setups and 112 distances, the mean difference between photogrammetrically reconstructed and manually measured distances was 0.25 ± 0.79 mm (mean ± 1 standard deviation). By a parameter study of translation T (mm) and rotation R (°) of the LCAs, we showed that these inaccuracies are clinically acceptable, i.e. they correspond at most to a ±1.02 mm error in translation or a ±0.48° error in rotation, or to combinations satisfying 4.35R² + 0.97T² = 1. We anticipate that such small errors will not have a relevant influence on the SAR distribution in the treated region. The clinical applicability of the procedure is shown on a patient with a breast cancer recurrence treated with reirradiation plus superficial hyperthermia using the six-LCA array. The total reconstruction procedure of six LCAs from a set of ten photos currently takes around 1.5 h. We conclude that the reconstruction of a superficial HT setup from multiple-view images is feasible, and that only minor errors are found, which will have a negligible influence on treatment planning quality.

  6. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase.
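
    Forward-backward splitting itself is a generic proximal-gradient scheme: a gradient (forward) step on the smooth part of the objective followed by a proximal (backward) step on the non-smooth part. The snippet below demonstrates the pattern on a toy sparse least-squares problem; it is only an illustration of the splitting idea, not the paper's CT-plus-registration formulation.

      import numpy as np

      def forward_backward_lasso(A, y, lam=0.1, n_iter=200):
          # Forward-backward splitting for: min_x 0.5*||Ax - y||^2 + lam*||x||_1
          step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)             # forward (gradient) step on the smooth term
              z = x - step * grad
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (proximal) step
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100); x_true[[3, 27, 64]] = [2.0, -1.5, 1.0]
      y = A @ x_true + 0.01 * rng.standard_normal(40)
      print("recovered support:", np.flatnonzero(np.abs(forward_backward_lasso(A, y)) > 0.1))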

  7. In Situ Casting and Imaging of the Rat Airway Tree for Accurate 3D Reconstruction

    SciTech Connect

    Jacob, Rick E.; Colby, Sean M.; Kabilan, Senthil; Einstein, Daniel R.; Carson, James P.

    2013-08-01

    The use of anatomically accurate, animal-specific airway geometries is important for understanding and modeling the physiology of the respiratory system. One approach for acquiring detailed airway architecture is to create a bronchial cast of the conducting airways. However, typical casting procedures either do not faithfully preserve the in vivo branching angles, or produce rigid casts that when removed for imaging are fragile and thus easily damaged. We address these problems by creating an in situ bronchial cast of the conducting airways in rats that can be subsequently imaged in situ using 3D micro-CT imaging. We also demonstrate that deformations in airway branch angles resulting from the casting procedure are small, and that these angle deformations can be reversed through an interactive adjustment of the segmented cast geometry. Animal work was approved by the Institutional Animal Care and Use Committee of Pacific Northwest National Laboratory.

  8. In Situ Casting and Imaging of the Rat Airway Tree for Accurate 3D Reconstruction

    PubMed Central

    Jacob, Richard E.; Colby, Sean M.; Kabilan, Senthil; Einstein, Daniel R.; Carson, James P.

    2014-01-01

    The use of anatomically accurate, animal-specific airway geometries is important for understanding and modeling the physiology of the respiratory system. One approach for acquiring detailed airway architecture is to create a bronchial cast of the conducting airways. However, typical casting procedures either do not faithfully preserve the in vivo branching angles or produce rigid casts that when removed for imaging are fragile and thus easily damaged. We address these problems by creating an in situ bronchial cast of the conducting airways in rats that can be subsequently imaged in situ using 3D micro-CT imaging. We also demonstrate that deformations in airway branch angles resulting from the casting procedure are small, and that these angle deformations can be reversed through an interactive adjustment of the segmented cast geometry. Animal work was approved by the Institutional Animal Care and Use Committee of Pacific Northwest National Laboratory. PMID:23786464

  9. Integration of multi-modality imaging for accurate 3D reconstruction of human coronary arteries in vivo

    NASA Astrophysics Data System (ADS)

    Giannoglou, George D.; Chatzizisis, Yiannis S.; Sianos, George; Tsikaderis, Dimitrios; Matakos, Antonis; Koutkias, Vassilios; Diamantopoulos, Panagiotis; Maglaveras, Nicos; Parcharidis, George E.; Louridas, George E.

    2006-12-01

    In conventional intravascular ultrasound (IVUS)-based three-dimensional (3D) reconstruction of human coronary arteries, IVUS images are arranged linearly, generating a straight vessel volume. However, with this approach the real vessel curvature is neglected. To overcome this limitation, an imaging method was developed based on the integration of IVUS and biplane coronary angiography (BCA). In 17 coronary arteries from nine patients, IVUS and BCA were performed. From each angiographic projection, a single end-diastolic frame was selected, and in each frame the IVUS catheter was interactively detected for the extraction of the 3D catheter path. Ultrasound data were obtained with a sheath-based catheter and recorded on S-VHS videotape. The S-VHS data were digitized, and lumen and media-adventitia contours were semi-automatically detected in end-diastolic IVUS images. Each pair of contours was aligned perpendicularly to the catheter path and rotated in space by implementing an algorithm based on the Frenet-Serret formulas. Lumen and media-adventitia contours were interpolated through the generation of intermediate contours, creating a real 3D lumen and vessel volume, respectively. The absolute orientation of the reconstructed lumen was determined by back-projecting it onto both angiographic planes and comparing the projected lumen with the actual angiographic lumen. In conclusion, our method is capable of performing rapid and accurate 3D reconstruction of human coronary arteries in vivo. This technique can be utilized for reliable plaque morphometric, geometrical and hemodynamic analyses.
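
    The rotation of successive contours in space relies on transporting a local frame along the catheter path. The snippet below is a rough, hypothetical illustration of discrete Frenet-Serret frames (tangent, normal, binormal) for a sampled 3D path; the helical path is invented, and a robust implementation would need special handling where the curvature vanishes.

      import numpy as np

      def frenet_frames(path):
          # Discrete tangent/normal/binormal frames along an (N, 3) sampled curve.
          t = np.gradient(path, axis=0)
          t /= np.linalg.norm(t, axis=1, keepdims=True)            # unit tangents
          dt = np.gradient(t, axis=0)
          n = dt - np.sum(dt * t, axis=1, keepdims=True) * t       # remove tangential component
          n /= np.linalg.norm(n, axis=1, keepdims=True)            # unit normals
          b = np.cross(t, n)                                       # binormals complete the frame
          return t, n, b

      # Fabricated helical catheter path for illustration.
      s = np.linspace(0.0, 4.0 * np.pi, 200)
      path = np.stack([np.cos(s), np.sin(s), 0.2 * s], axis=1)
      T, N, B = frenet_frames(path)
      print(T.shape, np.allclose(np.sum(T * N, axis=1), 0.0, atol=1e-6))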

  10. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot

    PubMed Central

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2014-01-01

    Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of a debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so the algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of the robot arm, and that the error of the reconstructed phantom is within 0.67 mm on average compared to the model design. PMID:26158071

  11. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot.

    PubMed

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J

    2014-10-01

    Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of a debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so the algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of the robot arm, and that the error of the reconstructed phantom is within 0.67 mm on average compared to the model design.

  12. Experimental study on the application of a compressed-sensing (CS) algorithm to dental cone-beam CT (CBCT) for accurate, low-dose image reconstruction

    NASA Astrophysics Data System (ADS)

    Oh, Jieun; Cho, Hyosung; Je, Uikyu; Lee, Minsik; Kim, Hyojeong; Hong, Daeki; Park, Yeonok; Lee, Seonhwa; Cho, Heemoon; Choi, Sungil; Koo, Yangseo

    2013-03-01

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient data. In computed tomography (CT), for example, image reconstruction from few views would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction method based on a compressed-sensing (CS) algorithm, which exploits the sparseness of the gradient image with substantially high accuracy, for accurate, low-dose dental cone-beam CT (CBCT) reconstruction. We applied the algorithm to a commercially available dental CBCT system (Expert7™, Vatech Co., Korea) and performed experimental work to demonstrate the algorithm for image reconstruction in insufficient sampling problems. We successfully reconstructed CBCT images from several undersampled data sets and evaluated the reconstruction quality in terms of the universal quality index (UQI). The experimental demonstrations indicate that the CS-based reconstruction algorithm can be applied to current dental CBCT systems for reducing imaging doses and improving image quality.
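
    The universal quality index (UQI) used for the evaluation has a standard closed form (the Wang-Bovik index). The snippet below is a minimal, global (non-windowed) implementation for two images, given as an illustration of the metric rather than the authors' exact evaluation code.

      import numpy as np

      def universal_quality_index(x, y):
          # Global Wang-Bovik universal quality index; 1.0 means the images are identical.
          x = x.astype(float).ravel()
          y = y.astype(float).ravel()
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = np.mean((x - mx) * (y - my))
          return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

      ref = np.tile(np.linspace(0.1, 1.0, 64), (64, 1))       # synthetic reference slice
      test = ref + 0.05 * np.random.default_rng(0).standard_normal(ref.shape)
      print("UQI:", universal_quality_index(ref, test))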

  13. Analysis of algebraic reconstruction technique for accurate imaging of gas temperature and concentration based on tunable diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He

    2016-06-01

    An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work simulates the reconstruction of spectroscopic measurements using a multi-view parallel-beam scanning geometry and analyzes the effect of the number of projection rays on reconstruction accuracy. The results show that reconstruction quality increases dramatically as the number of projection rays increases, up to about 180 rays for a 20 × 20 grid; beyond that point, additional rays have little influence on reconstruction accuracy. The temperature reconstruction results are more accurate than the water vapor concentration obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and greatly improve the reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simplicity of experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles, and it is expected to help resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant
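
    The algebraic reconstruction technique at the core of this study is the classical Kaczmarz row-action update. The snippet below is a minimal, generic ART solver for a small linear tomography system Ax = b; the random system matrix and the relaxation factor are placeholders rather than the paper's beam geometry.

      import numpy as np

      def art_reconstruct(A, b, n_sweeps=50, relax=0.5):
          # Classical ART (Kaczmarz) sweeps for Ax = b.
          x = np.zeros(A.shape[1])
          row_norms = np.sum(A * A, axis=1)
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  if row_norms[i] == 0.0:
                      continue
                  residual = b[i] - A[i] @ x
                  x += relax * residual / row_norms[i] * A[i]   # project onto hyperplane of ray i
          return x

      rng = np.random.default_rng(2)
      x_true = rng.random(20)                  # unknown distribution on a coarse grid (toy)
      A = rng.random((60, 20))                 # path-length weights of 60 projection rays (toy)
      x_rec = art_reconstruct(A, A @ x_true)
      print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))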

  14. A binary image reconstruction technique for accurate determination of the shape and location of metal objects in x-ray computed tomography.

    PubMed

    Wang, Jing; Xing, Lei

    2010-01-01

    The presence of metals in patients causes streaking artifacts in X-ray CT and has been recognized as a problem that limits various applications of CT imaging. Accurate localization of metals in CT images is a critical step for metal artifact reduction and for many practical applications of CT images. The purpose of this work is to develop a method for the automatic determination of the shape and location of metallic object(s) in the image space. The proposed method is based on the fact that when a metal object is present in a patient, a CT image can be divided into two prominent components: high-density metal and low-density normal tissues. This prior knowledge is incorporated into an objective function as a regularization term whose role is to encourage the solution to take the form of two intensity levels. A computer simulation study and four experimental studies were performed to evaluate the proposed approach. Both the simulation and experimental studies show that the presented algorithm works well even in the presence of complicated metal object shapes. For a hexagonally shaped metal object embedded in a water phantom, for example, the accuracy of the metal reconstruction is found to be at the sub-millimeter level.

  15. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis

    PubMed Central

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation underlies one of the inverse scattering solution methods for complex inverse problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure that can be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808
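
    The computational gain quoted above comes from solving the discrete convolution equation with the fast Fourier transform. As a generic illustration of that step (not the D-bar solver itself), the snippet below evaluates a 2D convolution via FFT and checks it against direct summation; the kernel and grid are arbitrary.

      import numpy as np
      from scipy.signal import convolve2d, fftconvolve

      rng = np.random.default_rng(3)
      f = rng.standard_normal((64, 64))    # field sampled on an N x N grid (toy)
      k = rng.standard_normal((64, 64))    # discrete convolution kernel (toy)

      # FFT-based convolution costs O(N^2 log N) versus O(N^4) for direct summation.
      fast = fftconvolve(f, k, mode="full")
      direct = convolve2d(f, k, mode="full")
      print("max deviation:", np.max(np.abs(fast - direct)))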

  16. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation underlies one of the inverse scattering solution methods for complex inverse problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure that can be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.

  17. Overview of Image Reconstruction

    SciTech Connect

    Marr, R. B.

    1980-04-01

    Image reconstruction (or computerized tomography, etc.) is any process whereby a function f on R^n is estimated from empirical data pertaining to its integrals, ∫f(x) dx, over some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references. (RWR)
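
    Of the algorithm classes listed above, filtered back-projection is the one most often met in practice. The snippet below is a brief demonstration on a synthetic phantom using scikit-image's radon/iradon routines (an assumed dependency), contrasting plain summation back-projection with filtered back-projection.

      import numpy as np
      from skimage.transform import iradon, radon

      # Synthetic phantom: a rectangle and a disk on a 128 x 128 grid.
      img = np.zeros((128, 128))
      img[30:60, 40:80] = 1.0
      yy, xx = np.mgrid[:128, :128]
      img[(yy - 85) ** 2 + (xx - 60) ** 2 < 15 ** 2] = 0.5

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(img, theta=theta)

      fbp = iradon(sinogram, theta=theta, filter_name="ramp")   # filtered back-projection
      bp = iradon(sinogram, theta=theta, filter_name=None)      # simple summation back-projection
      print("FBP relative error:", np.linalg.norm(fbp - img) / np.linalg.norm(img))
      print("BP relative error: ", np.linalg.norm(bp - img) / np.linalg.norm(img))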

  18. Structured image reconstruction for three-dimensional ghost imaging lidar.

    PubMed

    Yu, Hong; Li, Enrong; Gong, Wenlin; Han, Shensheng

    2015-06-01

    A structured image reconstruction method has been proposed to obtain high-quality images in three-dimensional ghost imaging lidar. By considering the spatial structure relationship between recovered images of scene slices at different longitudinal distances, an orthogonality constraint has been incorporated to reconstruct the three-dimensional scenes in remote sensing. Numerical simulations have been performed to demonstrate that scene slices with various sparse ratios can be recovered more accurately by applying the orthogonality constraint, and the enhancement is significant especially for ghost imaging with fewer measurements. A simulated three-dimensional city scene has been successfully reconstructed by using structured image reconstruction in three-dimensional ghost imaging lidar. PMID:26072814

  1. The Value of Accurate Magnetic Resonance Characterization of Posterior Cruciate Ligament Tears in the Setting of Multiligament Knee Injury: Imaging Features Predictive of Early Repair vs Reconstruction.

    PubMed

    Goiney, Christoper C; Porrino, Jack; Twaddle, Bruce; Richardson, Michael L; Mulcahy, Hyojeong; Chew, Felix S

    2016-01-01

    Multiligament knee injury (MLKI) represents a complex set of pathologies treated with a wide variety of surgical approaches. If early surgical intervention is performed, the disrupted posterior cruciate ligament (PCL) can be treated with primary repair or reconstruction. The purpose of our study was to retrospectively identify a critical length of the distal component of the torn PCL on magnetic resonance imaging (MRI) that may predict the ability to perform early proximal femoral repair of the ligament, as opposed to reconstruction. A total of 50 MLKIs were managed at Harborview Medical Center from May 1, 2013, through July 15, 2014, by an orthopedic surgeon. Following exclusions, there were 27 knees with complete disruption of the PCL that underwent either early reattachment to the femoral insertion or reconstruction and were evaluated using preoperative MRI. In a consensus fashion, 2 radiologists measured the proximal and distal fragments of each disrupted PCL using preoperative MRI in multiple planes, as needed. MRI findings were correlated with what was performed at surgery. Those knees with a distal fragment PCL length of ≥ 41 mm were capable of, and underwent, early proximal femoral repair. With repair, the distal stump was attached to the distal femur. Alternatively, those with a distal PCL length of ≤ 32 mm could not undergo repair because of insufficient length and as such, were reconstructed. If early surgical intervention for an MLKI involving disruption of the PCL is considered, attention should be given to the length of the distal PCL fragment on MRI to plan appropriately for proximal femoral reattachment vs reconstruction. If the distal PCL fragment measures ≥ 41 mm, surgical repair is achievable and can be considered as a surgical option.

  2. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data from projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, artifacts that appear temporarily are reduced with a bilateral filter, and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. The results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
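
    The augmented Lagrangian pattern used above (augment the objective with the equality constraints, then alternate a primal minimization with a multiplier update) can be shown on a toy equality-constrained least-squares problem. The snippet below is only a generic sketch of that optimization scheme, not the transmission-CT log-likelihood of the paper.

      import numpy as np

      def augmented_lagrangian_lsq(A, b, C, d, rho=10.0, n_outer=50):
          # Minimize 0.5*||Ax - b||^2 subject to Cx = d with an augmented Lagrangian.
          x = np.zeros(A.shape[1])
          lam = np.zeros(C.shape[0])                 # multipliers for the equality constraints
          for _ in range(n_outer):
              # Primal step (closed form): argmin_x 0.5||Ax-b||^2 + lam^T(Cx-d) + 0.5*rho*||Cx-d||^2
              H = A.T @ A + rho * C.T @ C
              g = A.T @ b + C.T @ (rho * d - lam)
              x = np.linalg.solve(H, g)
              lam += rho * (C @ x - d)               # dual (multiplier) update
          return x

      rng = np.random.default_rng(4)
      A = rng.standard_normal((30, 10)); b = rng.standard_normal(30)
      C = rng.standard_normal((3, 10));  d = rng.standard_normal(3)
      x = augmented_lagrangian_lsq(A, b, C, d)
      print("constraint violation:", np.linalg.norm(C @ x - d))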

  3. LOFAR sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.

    2015-03-01

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) with simulated and real LOFAR data. Results: We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives a correct photometry on high dynamic and wide-field images and improved realistic structures of extended sources (of simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.

  4. Spectral image reconstruction through the PCA transform

    NASA Astrophysics Data System (ADS)

    Ma, Long; Qiu, Xuewei; Cong, Yangming

    2015-12-01

    Digital color image reproduction based on spectral information has become a field of much interest and practical importance in recent years. The representation of color in digital form with multi-band images is not very accurate, hence the use of spectral images is justified. Reconstructing high-dimensional spectral reflectance images from relatively low-dimensional camera signals is generally an ill-posed problem. The aim of this study is to use the principal component analysis (PCA) transform in spectral reflectance image reconstruction. The performance is evaluated by the mean, median and standard deviation of color-difference values. The mean, median and standard deviation of the root-mean-square error and of the goodness-of-fit coefficient (GFC) between the reconstructed and the actual spectral images were also calculated. Simulation experiments conducted on a six-channel camera system and on spectral test images demonstrate the performance of the suggested method.
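
    The linear-algebra pattern behind PCA-based spectral reconstruction is compact: learn a low-dimensional PCA basis from training reflectances, then estimate the basis coefficients of a new sample from its camera responses by least squares. Everything in the snippet below (spectra, sensitivities, six channels) is synthetic and merely illustrates this pattern, not the paper's data or camera model.

      import numpy as np

      rng = np.random.default_rng(5)
      n_train, n_bands, n_channels, n_pc = 200, 31, 6, 6

      # Synthetic smooth training reflectances (31 bands, e.g. 400-700 nm in 10 nm steps).
      basis = np.stack([np.sin(np.linspace(0, (i + 1) * np.pi, n_bands)) for i in range(4)])
      train = np.clip(rng.random((n_train, 4)) @ basis * 0.3 + 0.4, 0, 1)

      # PCA of the training spectra.
      mean_spec = train.mean(axis=0)
      _, _, Vt = np.linalg.svd(train - mean_spec, full_matrices=False)
      pcs = Vt[:n_pc]                                   # leading principal components

      S = rng.random((n_channels, n_bands))             # assumed camera spectral sensitivities
      M = S @ pcs.T                                     # maps PC coefficients to camera signals

      r_true = np.clip(rng.random(4) @ basis * 0.3 + 0.4, 0, 1)
      signals = S @ r_true                              # simulated 6-channel camera response

      coeffs, *_ = np.linalg.lstsq(M, signals - S @ mean_spec, rcond=None)
      r_est = mean_spec + coeffs @ pcs                  # reconstructed reflectance spectrum
      print("RMS error:", np.sqrt(np.mean((r_est - r_true) ** 2)))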

  5. Exercises in PET Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Nix, Oliver

    These exercises are complementary to the theoretical lectures about positron emission tomography (PET) image reconstruction. They aim at providing some hands-on experience in PET image reconstruction and focus on demonstrating the different data preprocessing steps and reconstruction algorithms needed to obtain high-quality PET images. Normalisation and geometric, attenuation and scatter corrections are introduced. To explain the necessity of those, some basics about PET scanner hardware, data acquisition and organisation are reviewed. During the course the students use a software application based on the STIR (software for tomographic image reconstruction) library [1,2] which allows them to dynamically select or deselect corrections and reconstruction methods as well as to modify their most important parameters. Following the guided tutorial, the students get an impression of the effect the individual data precorrections have on image quality and of what happens if they are omitted. Several data sets in sinogram format are provided, such as line source data, Jaszczak phantom data sets with high and low statistics, and NEMA whole-body phantom data. The two most frequently used reconstruction algorithms in PET image reconstruction, filtered back projection (FBP) and the iterative OSEM (ordered subset expectation maximization) approach, are used to reconstruct images. The exercises should help the students gain an understanding of the causes of inferior image quality and artefacts, and of how to improve quality by a clever choice of reconstruction parameters.
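
    With a single subset, the OSEM algorithm mentioned above reduces to the classical MLEM multiplicative update x <- x * (A^T (y / (A x))) / (A^T 1). The snippet below is a tiny, generic demonstration of that update on a random nonnegative system matrix, in the spirit of a teaching exercise rather than STIR code.

      import numpy as np

      def mlem(A, y, n_iter=100):
          # Basic MLEM updates for emission tomography: y ~ Poisson(Ax).
          x = np.ones(A.shape[1])                 # strictly positive initial image
          sens = A.sum(axis=0)                    # sensitivity image A^T 1
          for _ in range(n_iter):
              proj = A @ x                        # forward projection
              ratio = y / np.maximum(proj, 1e-12) # measured / estimated counts
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      rng = np.random.default_rng(6)
      A = rng.random((120, 36))                   # toy system matrix (120 LORs, 6x6 image)
      x_true = rng.random(36) * 10
      y = rng.poisson(A @ x_true).astype(float)   # noisy sinogram counts
      x_hat = mlem(A, y)
      print("correlation with truth:", np.corrcoef(x_hat, x_true)[0, 1])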

  6. Crystallographic image reconstruction problem

    NASA Astrophysics Data System (ADS)

    ten Eyck, Lynn F.

    1993-11-01

    The crystallographic X-ray diffraction experiment gives the amplitudes of the Fourier series expansion of the electron density distribution within the crystal. The 'phase problem' in crystallography is the determination of the phase angles of the Fourier coefficients required to calculate the Fourier synthesis and reveal the molecular structure. The magnitude of this task varies enormously as the size of the structures ranges from a few atoms to thousands of atoms, and the number of Fourier coefficients ranges from hundreds to hundreds of thousands. The issue is further complicated for large structures by limited resolution. This problem is solved for 'small' molecules (up to 200 atoms and a few thousand Fourier coefficients) by methods based on probabilistic models which depend on atomic resolution. These methods generally fail for larger structures such as proteins. The phase problem for protein molecules is generally solved either by laborious experimental methods or by exploiting known similarities to solved structures. Various direct methods have been attempted for very large structures over the past 15 years, with gradually improving results -- but so far no complete success. This paper reviews the features of the crystallographic image reconstruction problem which render it recalcitrant, and describes recent encouraging progress in the application of maximum entropy methods to this problem.

  7. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate than the line integral model (LIM) and yields better reconstruction quality. However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection areas into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computing the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising image quality.
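
    A brute-force reference for the beam-pixel intersection areas that populate an AIM system matrix is exact polygon clipping, which the analytic scheme above is designed to avoid. The snippet below uses Shapely (an assumed dependency) to clip a fabricated narrow fan-beam polygon against each pixel square and record the overlap areas.

      import numpy as np
      from shapely.geometry import Polygon, box

      # Narrow fan-beam between two boundary rays from a source point (fabricated geometry).
      beam = Polygon([(-2.0, 0.0), (6.0, 2.2), (6.0, 3.4)])

      n = 6                                     # 6 x 6 pixel grid with unit pixels
      weights = np.zeros((n, n))
      for i in range(n):                        # row index (y)
          for j in range(n):                    # column index (x)
              pixel = box(j, i, j + 1, i + 1)
              weights[i, j] = beam.intersection(pixel).area   # AIM system-matrix entry

      print(np.round(weights, 3))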

  8. Automated End-to-End Workflow for Precise and Geo-accurate Reconstructions using Fiducial Markers

    NASA Astrophysics Data System (ADS)

    Rumpler, M.; Daftry, S.; Tscharf, A.; Prettenthaler, R.; Hoppe, C.; Mayer, G.; Bischof, H.

    2014-08-01

    Photogrammetric computer vision systems have been well established in many scientific and commercial fields during the last decades. Recent developments in image-based 3D reconstruction systems, in conjunction with the availability of affordable high-quality digital consumer-grade cameras, have resulted in an easy way of creating visually appealing 3D models. However, many of these methods require manual steps in the processing chain, and for many photogrammetric applications such as mapping, recurrent topographic surveys or architectural and archaeological 3D documentation, high accuracy in a geo-coordinate system is required, which often cannot be guaranteed. Hence, in this paper we present and advocate a fully automated end-to-end workflow for precise and geo-accurate 3D reconstructions using fiducial markers. We integrate an automatic camera calibration and georeferencing method into our image-based reconstruction pipeline based on binary-coded fiducial markers as artificial, individually identifiable landmarks in the scene. Additionally, we facilitate the use of these markers in conjunction with known ground control points (GCP) in the bundle adjustment, and use an online feedback method that allows assessment of the final reconstruction quality in terms of image overlap, ground sampling distance (GSD) and completeness, and thus provides the flexibility to adapt the image acquisition strategy already during image recording. An extensive set of experiments is presented which demonstrates the accuracy benefits of obtaining a highly accurate and geographically aligned reconstruction with an absolute point position uncertainty of about 1.5 times the ground sampling distance.

  9. Geometric reconstruction using tracked ultrasound strain imaging

    NASA Astrophysics Data System (ADS)

    Pheiffer, Thomas S.; Simpson, Amber L.; Ondrake, Janet E.; Miga, Michael I.

    2013-03-01

    The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is the standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and tumor surfaces were then reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were between approximately 1.5 and 2.4 cm³, similar to the CT volumes of 1.0 to 2.3 cm³. Future work will be done to robustly characterize the reconstruction accuracy of the system.

  10. Multi-contrast magnetic resonance image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Meng; Chen, Yunmei; Zhang, Hao; Huang, Feng

    2015-03-01

    In clinical exams, multi-contrast images from conventional MRI are scanned with the same field of view (FOV) for complementary diagnostic information, such as proton density- (PD-), T1- and T2-weighted images. Their sharable information can be utilized for more robust and accurate image reconstruction. In this work, we propose a novel model and an efficient algorithm for joint image reconstruction and coil sensitivity estimation in multi-contrast partially parallel imaging (PPI) in MRI. Our algorithm restores the multi-contrast images by minimizing an energy function consisting of an L2-norm fidelity term to reduce reconstruction errors caused by motion, a regularization term on the underlying images that preserves common anatomical features by using a vectorial total variation (VTV) regularizer, and a Tikhonov smoothness term for updating the sensitivity maps based on their physical properties. We present numerical results including T1- and T2-weighted MR images recovered from partially scanned k-space data and provide comparisons between our results and those obtained from related existing works. Our numerical results indicate that the proposed method using vectorial TV and penalties on sensitivities is promising for multi-contrast multi-channel MR image reconstruction.

  11. A fast and accurate algorithm for diploid individual haplotype reconstruction.

    PubMed

    Wu, Jingli; Liang, Binbin

    2013-08-01

    Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping when only biological techniques are used. With the development of sequencing technologies, it has become possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem for a diploid individual has received considerable attention in recent years. It assembles the two haplotypes of a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. The FAHR algorithm reconstructs the SNP sites of a pair of haplotypes one after another. The SNP fragments that cover a given SNP site are partitioned into two groups according to the alleles at that site, and the SNP values of the pair of haplotypes are ascertained by using the fragments in the group that contains more SNP fragments. Experimental comparisons were conducted among the FAHR, Fast Hare and DGS algorithms by using the haplotypes on chromosome 1 of 60 individuals in the CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher than those of the Fast Hare and DGS algorithms, and the running time of the FAHR algorithm is shorter than those of the Fast Hare and DGS algorithms. Moreover, the FAHR algorithm has high efficiency even for the reconstruction of long haplotypes and is very practical for realistic applications.
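
    The site-by-site rule described above can be illustrated with a deliberately simplified sketch: at each SNP site, the fragments covering the site are split by allele, and the larger group sets the allele of one haplotype while the other haplotype takes the complement. The toy fragments and the 0/1/None encoding below are assumptions, and the sketch ignores the linkage handling of the full FAHR algorithm.

      # Fragments: one entry per SNP site; 0/1 is the observed allele, None means the site is not covered.
      fragments = [
          [0, 1, None, 0],
          [0, 1, 1, None],
          [1, None, 0, 1],
          [1, 0, 0, 1],
          [None, 1, 1, 0],
      ]

      n_sites = len(fragments[0])
      hap1, hap2 = [], []
      for site in range(n_sites):
          alleles = [f[site] for f in fragments if f[site] is not None]
          ones = sum(alleles)
          zeros = len(alleles) - ones
          a = 1 if ones >= zeros else 0      # allele supported by the larger fragment group
          hap1.append(a)
          hap2.append(1 - a)                 # diploid complement on the other haplotype

      print("haplotype 1:", hap1)
      print("haplotype 2:", hap2)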

  12. Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes.

    PubMed

    Shen, Shuhan

    2013-05-01

    In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate depth-map at each image with acceptable errors, followed by a depth-map refinement process to enforce consistency over neighboring views. Compared to state-of-the-art methods, the proposed method can reconstruct quite accurate and dense point clouds with high computational efficiency. Besides, the proposed method could be easily parallelized at image level, i.e., each depth-map is computed individually, which makes it suitable for large-scale scene reconstruction with high resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets.

  13. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  14. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high-resolution PET images with full 3D human brain scanning. However, the short time frames in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated on simulated images, physical phantoms, and real human brain data. Meanwhile, the time-activity curve was adopted to compare the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulation and phantom studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than those of OP-OSEM. The presented algorithm is useful for providing quality images under the condition of low count rates in dynamic studies with a short scan time.

  15. Towards an accurate volume reconstruction in atom probe tomography.

    PubMed

    Beinke, Daniel; Oberdorfer, Christian; Schmitz, Guido

    2016-06-01

    An alternative concept for the reconstruction of atom probe data is outlined. It is based on the calculation of realistic trajectories of the evaporated ions in a recursive refinement process. To this end, the electrostatic problem is solved on a Delaunay tessellation. To enable the trajectory calculation, the order of reconstruction is inverted with respect to previous reconstruction schemes: the last atom detected is reconstructed first. In this way, the emitter shape, which controls the trajectory, can be defined throughout the duration of the reconstruction. A proof of concept is presented for 3D model tips, containing spherical precipitates or embedded layers of strongly contrasting evaporation thresholds. While the traditional method following Bas et al. generates serious distortions in these cases, a reconstruction with the proposed electrostatically informed approach improves the geometry of layers and particles significantly.

  16. IVUSAngio tool: a publicly available software for fast and accurate 3D reconstruction of coronary arteries.

    PubMed

    Doulaverakis, Charalampos; Tsampoulatidis, Ioannis; Antoniadis, Antonios P; Chatzizisis, Yiannis S; Giannopoulos, Andreas; Kompatsiaris, Ioannis; Giannoglou, George D

    2013-11-01

    There is an ongoing research and clinical interest in the development of reliable and easily accessible software for the 3D reconstruction of coronary arteries. In this work, we present the architecture and validation of IVUSAngio Tool, an application which performs fast and accurate 3D reconstruction of the coronary arteries by using intravascular ultrasound (IVUS) and biplane angiography data. The 3D reconstruction is based on the fusion of the detected arterial boundaries in IVUS images with the 3D IVUS catheter path derived from the biplane angiography. The IVUSAngio Tool suite integrates all the intermediate processing and computational steps and provides a user-friendly interface. It also offers additional functionality, such as automatic selection of the end-diastolic IVUS images, semi-automatic and automatic IVUS segmentation, vascular morphometric measurements, graphical visualization of the 3D model and export in a format compatible with other computer-aided design applications. Our software was applied and validated in 31 human coronary arteries yielding quite promising results. Collectively, the use of IVUSAngio Tool significantly reduces the total processing time for 3D coronary reconstruction. IVUSAngio Tool is distributed as free software, publicly available to download and use.

  17. Fast iterative image reconstruction of 3D PET data

    SciTech Connect

    Kinahan, P.E.; Townsend, D.W.; Michel, C.

    1996-12-31

    For count-limited PET imaging protocols, two different approaches to reducing statistical noise are volume, or 3D, imaging to increase sensitivity, and statistical reconstruction methods to reduce noise propagation. These two approaches have largely been developed independently, likely due to the perception of the large computational demands of iterative 3D reconstruction methods. We present results of combining the sensitivity of 3D PET imaging with the noise reduction and reconstruction speed of 2D iterative image reconstruction methods. This combination is made possible by using the recently-developed Fourier rebinning technique (FORE), which accurately and noiselessly rebins 3D PET data into a 2D data set. The resulting 2D sinograms are then reconstructed independently by the ordered-subset EM (OSEM) iterative reconstruction method, although any other 2D reconstruction algorithm could be used. We demonstrate significant improvements in image quality for whole-body 3D PET scans by using the FORE+OSEM approach compared with the standard 3D Reprojection (3DRP) algorithm. In addition, the FORE+OSEM approach involves only 2D reconstruction and it therefore requires considerably less reconstruction time than the 3DRP algorithm, or any fully 3D statistical reconstruction algorithm.

  18. Light Field Imaging Based Accurate Image Specular Highlight Removal.

    PubMed

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios owing to their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are applied to the two categories, respectively, to recover diffuse color information. Experimental evaluation against existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
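
    As a rough illustration of the threshold step described above (not the paper's actual pipeline, which also uses light-field depth and multi-view color variance), the following sketch splits candidate specular pixels into "unsaturated" and "saturated" groups; the threshold values are assumptions chosen for illustration only.

```python
import numpy as np

def classify_specular_pixels(img, spec_thresh=0.7, sat_thresh=0.98):
    """Toy two-level threshold split of specular pixels.

    img : float RGB image in [0, 1], shape (H, W, 3).
    Returns boolean masks for 'unsaturated' and 'saturated' specular pixels.
    Threshold values are illustrative, not those used by the cited paper.
    """
    intensity = img.max(axis=2)              # per-pixel peak channel value
    specular = intensity > spec_thresh       # candidate specular pixels
    saturated = specular & (intensity >= sat_thresh)
    unsaturated = specular & ~saturated
    return unsaturated, saturated
```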

  19. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios owing to their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are applied to the two categories, respectively, to recover diffuse color information. Experimental evaluation against existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083

  20. Maximum entropy image reconstruction from projections

    NASA Astrophysics Data System (ADS)

    Bara, N.; Murata, K.

    1981-07-01

    The maximum entropy method is applied to image reconstruction from projections in which the angular view is restricted. Relaxation parameters are introduced into the maximum entropy reconstruction, and median filtering is applied after the iterations. These procedures improve the quality of images reconstructed from noisy projections.
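
    Schematically, and in notation of our own choosing (the authors' exact iteration is not reproduced here), the reconstruction seeks the maximum-entropy image consistent with the measured projections, with a relaxation parameter scaling each update and median filtering applied afterwards:

```latex
\hat{f} \;=\; \arg\max_{f \ge 0}\;\Bigl[-\sum_{j} f_j \ln f_j\Bigr]
\quad \text{s.t.} \quad P f = g,
\qquad
f^{(k+1)} \;=\; f^{(k)} + \lambda_k\,\Delta f^{(k)}, \qquad 0 < \lambda_k \le 1 .
```

    Here P is the projection operator, g the measured (noisy, limited-angle) projections, and Δf^(k) the update prescribed by the chosen maximum-entropy iteration.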

  1. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the maximum entropy method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a uniformly redundant array (URA) system. Although MEM has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use MEM in future coded-aperture experiments with the help of the MPP.

  2. Synergistic image reconstruction for hybrid ultrasound and photoacoustic computed tomography

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Wang, Kun; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. Subsequently, this SOS map is employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems to one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) Variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) The reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) Due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.
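
    One way to write the joint estimation described above, in generic notation of our own (the authors' exact formulation may differ), is as a single variational problem over both unknowns:

```latex
(\hat{A}, \hat{c}) \;=\; \arg\min_{A,\,c}\;
\bigl\|\,p_{\mathrm{PACT}} - \mathcal{H}_{c}\,A\,\bigr\|_2^2
\;+\; \beta\,\bigl\|\,t_{\mathrm{USCT}} - \mathcal{T}(c)\,\bigr\|_2^2
\;+\; \mathcal{R}(A, c),
```

    where A is the absorbed optical energy density, c the SOS map, H_c the SOS-dependent photoacoustic forward operator, T(c) the USCT (e.g., time-of-flight) forward model, and R a regularizer.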

  3. Method for positron emission mammography image reconstruction

    DOEpatents

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data either from a data file or in real time from a pair of detector heads, culling event data that are outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection mode, rays are traced between the centers of lines of response (LORs); counts are then allocated either by nearest-pixel interpolation or by an overlap method, corrected for geometric effects and attenuation, and the data file is updated. If the iterative reconstruction option is selected, one implementation is to perform grid-based Siddon ray tracing and to compute maximum likelihood expectation maximization (MLEM) by either: (a) tracing parallel rays between subpixels on opposite detector heads; or (b) tracing rays between randomized endpoint locations on opposite detector heads.
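
    A minimal 2D sketch of the backprojection mode described above is given below; it traces a ray between the two detector-pixel centers of each coincidence and allocates counts to the nearest image pixel. The geometric and attenuation corrections mentioned in the patent are omitted, and all names are illustrative.

```python
import numpy as np

def backproject_lors(events, grid_shape, n_samples=128):
    """Naive 2D backprojection of coincidence events.

    events     : iterable of ((x1, y1), (x2, y2)) detector-pixel centers
                 in image coordinates (pixels), one pair per coincidence.
    grid_shape : (ny, nx) of the reconstruction grid.
    Counts are allocated to the nearest pixel along each sampled ray;
    geometric and attenuation corrections are omitted for brevity.
    """
    img = np.zeros(grid_shape)
    ny, nx = grid_shape
    for (x1, y1), (x2, y2) in events:
        t = np.linspace(0.0, 1.0, n_samples)
        xs = np.rint(x1 + t * (x2 - x1)).astype(int)
        ys = np.rint(y1 + t * (y2 - y1)).astype(int)
        ok = (xs >= 0) & (xs < nx) & (ys >= 0) & (ys < ny)
        np.add.at(img, (ys[ok], xs[ok]), 1.0 / n_samples)
    return img
```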

  4. Super-resolution image reconstruction using diffuse source models.

    PubMed

    Ellis, Michael A; Viola, Francesco; Walker, William F

    2010-06-01

    Image reconstruction is central to many scientific fields, from medical ultrasound and sonar to computed tomography and computer vision. Although lenses play a critical reconstruction role in these fields, digital sensors enable more sophisticated computational approaches. A variety of computational methods have thus been developed, with the common goal of increasing contrast and resolution to extract the greatest possible information from raw data. This paper describes a new image reconstruction method named the Diffuse Time-domain Optimized Near-field Estimator (dTONE). dTONE represents each hypothetical target in the system model as a diffuse region of targets rather than a single discrete target, which more accurately represents the experimental data that arise from signal sources in continuous space, with no additional computational requirements at the time of image reconstruction. Simulation and experimental ultrasound images of animal tissues show that dTONE achieves image resolution and contrast far superior to those of conventional image reconstruction methods. We also demonstrate the increased robustness of the diffuse target model to major sources of image degradation through the addition of electronic noise, phase aberration and magnitude aberration to ultrasound simulations. Using experimental ultrasound data from a tissue-mimicking phantom containing a 3-mm-diameter anechoic cyst, the conventionally reconstructed image has a cystic contrast of -6.3 dB, whereas the dTONE image has a cystic contrast of -14.4 dB.

  5. Reconstructing HST Images of Asteroids

    NASA Astrophysics Data System (ADS)

    Storrs, A. D.; Bank, S.; Gerhardt, H.; Makhoul, K.

    2003-12-01

    We present reconstructions of images of 22 large main belt asteroids that were observed by the Hubble Space Telescope with the Wide-Field/Planetary cameras. All images were restored with the MISTRAL program (Mugnier, Fusco, and Conan 2003) at enhanced spatial resolution. This is possible thanks to the well-studied and stable point spread function (PSF) of HST. We present some modeling of this process and determine that the Strehl ratio of the restored WF/PC (aberrated) images can be improved substantially. We will report sizes, shapes, and albedos for these objects, as well as any surface features. Images taken with the WFPC-2 instrument were made in a variety of filters, so that it should be possible to investigate changes in mineralogy across the surface of the larger asteroids in a manner similar to that done on 4 Vesta by Binzel et al. (1997). Of particular interest are a possible water of hydration feature on 1 Ceres, and the non-observation of a constriction or gap between the components of 216 Kleopatra. Reduction of these data was aided by grant HST-GO-08583.08A from the Space Telescope Science Institute. References: Mugnier, L.M., T. Fusco, and J.-M. Conan, 2003, JOSA A (submitted); Binzel, R.P., Gaffey, M.J., Thomas, P.C., Zellner, B.H., Storrs, A.D., and Wells, E.N. 1997, Icarus 128, pp. 95-103.

  6. Correlation-Based Image Reconstruction Methods for Magnetic Particle Imaging

    NASA Astrophysics Data System (ADS)

    Ishihara, Yasutoshi; Kuwabara, Tsuyoshi; Honma, Takumi; Nakagawa, Yohei

    Magnetic particle imaging (MPI), in which the nonlinear interaction between internally administered magnetic nanoparticles (MNPs) and electromagnetic waves irradiated from outside of the body is utilized, has attracted attention for its potential to achieve early diagnosis of diseases such as cancer. In MPI, the local magnetic field distribution is scanned, and the magnetization signal from MNPs within a selected region is detected. However, the signal sensitivity and image resolution are degraded by interference from magnetization signals generated by MNPs outside of the selected region, mainly because of imperfections (limited gradients) in the local magnetic field distribution. Here, we propose new methods based on correlation information between the observed signal and the system function—defined as the interaction between the magnetic field distribution and the magnetizing properties of MNPs. We performed numerical analyses and found that, although the images were somewhat blurred, image artifacts could be significantly reduced and accurate images could be reconstructed without the inverse-matrix operation used in conventional image reconstruction methods.
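
    One simple reading of "correlation-based reconstruction" (our notation; the authors' definition may differ in detail) assigns each image position the normalized correlation between the observed signal and the corresponding column of the system function:

```latex
\hat{c}_j \;=\; \frac{\langle s,\; a_j \rangle}{\|s\|_2\,\|a_j\|_2},
```

    where s is the measured signal and a_j is the system-function response of a unit concentration at position j; no inverse-matrix operation is required, consistent with the abstract's claim.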

  7. Image reconstruction via truncated lambda tomography

    NASA Astrophysics Data System (ADS)

    Yu, Hengyong; Ye, Yangbo; Wang, Ge

    2006-08-01

    This paper investigates the feasibility of reconstructing a computed tomography (CT) image from truncated lambda tomography (LT), a gradient-like image of the original. An LT image can be regarded as a convolution of the object image with the point spread function (PSF) of the Calderon operator. The infinite support of this PSF gives the LT image infinite support, even though the original CT image has compact support. When the support of a truncated LT image fully covers the compact support of the corresponding CT image, we develop an extrapolation method to recover the CT image more precisely. When the support of the CT image fully covers the support of the truncated LT image, we design a template-based scheme to compensate for the cupping effects and reconstruct a satisfactory image. Our algorithms are evaluated in numerical simulations, and the results demonstrate the feasibility of our methods. Our approaches provide a new way to reconstruct high-quality CT images.

  8. Heuristic reconstructions of neutron penumbral images

    SciTech Connect

    Nozaki, Shinya; Chen Yenwei

    2004-10-01

    Penumbral imaging is a coded-aperture imaging technique proposed for imaging highly penetrating radiation. To date, the penumbral imaging technique has been successfully applied to neutron imaging in laser fusion experiments. Because the reconstruction of penumbral images is based on linear deconvolution methods, such as the inverse filter and the Wiener filter, the point spread function of the aperture must be space invariant, and the reconstruction is sensitive to the noise contained in the penumbral images. In this article, we propose a new heuristic reconstruction method for neutron penumbral imaging that can be used with a space-variant imaging system and is also very tolerant of noise.

  9. Prospective regularization design in prior-image-based reconstruction.

    PubMed

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-12-21

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in
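
    For context, a typical PIBR objective (written here in our own generic notation, not necessarily the exact estimator used by the authors) makes the role of the prior-image strength explicit:

```latex
\hat{x} \;=\; \arg\max_{x}\; L(y \mid x)
\;-\; \beta_R\, R(x)
\;-\; \beta_P\, \bigl\|\Psi\,(x - x_{\mathrm{prior}})\bigr\|_1 ,
```

    where L is the data log-likelihood, R a roughness penalty, x_prior the prior image, Ψ a sparsifying transform, and β_P the prior-image strength whose optimal (possibly spatially varying) value the proposed method predicts without performing full reconstructions.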

  10. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  11. High-resolution reconstruction for terahertz imaging.

    PubMed

    Xu, Li-Min; Fan, Wen-Hui; Liu, Jia

    2014-11-20

    We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology. The algorithms used include a projection onto convex sets (POCS) approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken with our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two methods, we transform a relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential of HR reconstruction methods for terahertz imaging with both pulsed and continuous-wave terahertz sources.
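
    The iterative back-projection idea mentioned above can be sketched for the simplest case of integer shifts and box-average downsampling; this is an illustrative toy model, not the authors' implementation, and the observation model is an assumption.

```python
import numpy as np

def ibp_superres(lr_images, shifts, scale=2, n_iter=20, step=1.0):
    """Minimal iterative back-projection super-resolution.

    lr_images : list of (h, w) low-resolution frames
    shifts    : list of (dy, dx) integer shifts of each frame, in HR pixels
    scale     : integer factor relating the HR and LR grids
    Assumes a simple shift + box-average observation model.
    """
    h, w = lr_images[0].shape
    hr = np.kron(lr_images[0], np.ones((scale, scale)))   # initial upsampled guess
    for _ in range(n_iter):
        for lr, (dy, dx) in zip(lr_images, shifts):
            sim = np.roll(hr, (-dy, -dx), axis=(0, 1))
            sim = sim.reshape(h, scale, w, scale).mean(axis=(1, 3))  # simulate LR frame
            err = lr - sim
            up = np.kron(err, np.ones((scale, scale)))               # back-project error
            hr += step * np.roll(up, (dy, dx), axis=(0, 1))
        hr = np.clip(hr, 0, None)
    return hr
```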

  12. Accurate identification of paraprotein antigen targets by epitope reconstruction

    PubMed Central

    Sompuram, Seshi R.; Bastas, Gerassimos; Vani, Kodela

    2008-01-01

    We describe the first successful clinical application of a new discovery technology, epitope-mediated antigen prediction (E-MAP), to the investigation of multiple myeloma. Until now, there has been no reliable, systematic method to identify the cognate antigens of paraproteins. E-MAP is a variation of previous efforts to reconstruct the epitopes of paraproteins, with the significant difference that it provides enough epitope sequence data to enable successful protein database searches. We first reconstruct the paraprotein's epitope by analyzing the peptides that bind strongly. Then, we compile the data and interrogate the nonredundant protein database, searching for a close match. As a clinical proof-of-concept, we apply this technology to uncovering the protein targets of paraproteins in multiple myeloma (MM). E-MAP analysis of 2 MM paraproteins identified human cytomegalovirus (HCMV) as a target in both. E-MAP sequence analysis determined that one paraprotein binds to the AD-2S1 epitope of HCMV glycoprotein B. The other binds to the amino terminus of the HCMV UL-48 gene product. We confirmed these predictions using immunoassays and immunoblot analyses. E-MAP represents a new investigative tool for analyzing the role of chronic antigenic stimulation in B-lymphoproliferative disorders. PMID:17878398

  13. Image reconstruction in transcranial photoacoustic computed tomography of the brain

    NASA Astrophysics Data System (ADS)

    Mitsuhashi, Kenji; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Photoacoustic computed tomography (PACT) holds great promise for transcranial brain imaging. However, the strong reflection, scattering, attenuation, and mode-conversion of photoacoustic waves in the skull pose serious challenges to establishing the method. The lack of an appropriate model of solid media in conventional PACT imaging models, which are based on the canonical scalar wave equation, causes a significant model mismatch in the presence of the skull and thus results in deteriorated reconstructed images. The goal of this study was to develop an image reconstruction algorithm that accurately models the skull and thereby ameliorates the quality of reconstructed images. The propagation of photoacoustic waves through the skull was modeled by a viscoelastic stress tensor wave equation, which was subsequently discretized by use of a staggered grid fourth-order finite-difference time-domain (FDTD) method. The matched adjoint of the FDTD-based wave propagation operator was derived for implementing a back-projection operator. Systematic computer simulations were conducted to demonstrate the effectiveness of the back-projection operator for reconstructing images in a realistic three-dimensional PACT brain imaging system. The results suggest that the proposed algorithm can successfully reconstruct images from transcranially-measured pressure data and readily be translated to clinical PACT brain imaging applications.

  14. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We examined and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) master's thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame-ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued our work in the vector quantization area and developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share the property that they use past data to encode future data, either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw: when the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the number of additional pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.

  15. Patient-Specific Orbital Implants: Development and Implementation of Technology for More Accurate Orbital Reconstruction.

    PubMed

    Podolsky, Dale J; Mainprize, James G; Edwards, Glenn P; Antonyshyn, Oleh M

    2016-01-01

    Fracture of the orbital floor is commonly seen in facial trauma. Accurate anatomical reconstruction of the orbital floor contour is challenging. The authors demonstrate a novel method to more precisely reconstruct the orbital floor in a 50-year-old female who sustained an orbital floor fracture following a fall. Results of the reconstruction show excellent reapproximation of the native orbital floor contour and complete resolution of her enophthalmos and facial asymmetry. PMID:26674886

  16. Optimal reconstruction of images from localized phase.

    PubMed

    Urieli, S; Porat, M; Cohen, N

    1998-01-01

    The importance of localized phase in signal representation is investigated. The convergence rate of the POCS algorithm (projection onto convex sets) used for image reconstruction from spectral phase is defined and analyzed, and the characteristics of images optimally reconstructed from phase-only information are presented. It is concluded that images of geometric form are most efficiently reconstructed from their spectral phase, whereas images of symmetric form have the poorest convergence characteristics. The transition between the two extremes is shown to be continuous. The results provide a new approach and analysis of the previously reported advantages of the localized phase representation over the global approach, and suggest possible compression schemes.
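
    A minimal sketch of reconstruction from spectral phase alone, in the POCS spirit described above, alternates between imposing the measured Fourier phase and an image-domain constraint; the starting image, constraints, and iteration count are assumptions made for illustration.

```python
import numpy as np

def reconstruct_from_phase(phase, n_iter=200, support=None):
    """POCS-style reconstruction of an image from its Fourier phase only.

    phase   : 2D array of Fourier phase angles (radians)
    support : optional boolean mask of pixels allowed to be nonzero
    The magnitude is unknown, so each iteration keeps the measured phase
    and projects onto a nonnegativity/support constraint in image space.
    """
    x = np.random.rand(*phase.shape)            # arbitrary nonnegative start
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = np.abs(X) * np.exp(1j * phase)      # impose the known phase
        x = np.real(np.fft.ifft2(X))
        x = np.clip(x, 0, None)                 # image-domain constraint
        if support is not None:
            x *= support
    return x
```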

  17. Accurate statistical tests for smooth classification images.

    PubMed

    Chauvin, Alan; Worsley, Keith J; Schyns, Philippe G; Arguin, Martin; Gosselin, Frédéric

    2005-10-05

    Despite an obvious demand for a variety of statistical tests adapted to classification images, few have been proposed. We argue that two statistical tests based on random field theory (RFT) satisfy this need for smooth classification images. We illustrate these tests on classification images representative of the literature from F. Gosselin and P. G. Schyns (2001) and from A. B. Sekuler, C. M. Gaspar, J. M. Gold, and P. J. Bennett (2004). The necessary computations are performed using the Stat4Ci Matlab toolbox.

  18. Image Reconstruction Using Analysis Model Prior.

    PubMed

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171

  19. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171

  20. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which degrades conventional CS reconstruction methods considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and the related analysis. PMID:24675758

  1. Image reconstruction for robot assisted ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Aalamifar, Fereshteh; Zhang, Haichong K.; Rahmim, Arman; Boctor, Emad M.

    2016-04-01

    An investigation of several image reconstruction methods for robot-assisted ultrasound (US) tomography setup is presented. In the robot-assisted setup, an expert moves the US probe to the location of interest, and a robotic arm automatically aligns another US probe with it. The two aligned probes can then transmit and receive US signals which are subsequently used for tomographic reconstruction. This study focuses on reconstruction of the speed of sound. In various simulation evaluations as well as in an experiment with a millimeter-range inaccuracy, we demonstrate that the limited data provided by two probes can be used to reconstruct pixel-wise images differentiating between media with different speeds of sound. Combining the results of this investigation with the developed robot-assisted US tomography setup, we envision feasibility of this setup for tomographic imaging in applications beyond breast imaging, with potentially significant efficacy in cancer diagnosis.

  2. Image Reconstruction for Prostate Specific Nuclear Medicine imagers

    SciTech Connect

    Mark Smith

    2007-01-11

    There is increasing interest in the design and construction of nuclear medicine detectors for dedicated prostate imaging. These include detectors designed for imaging the biodistribution of radiopharmaceuticals labeled with single gamma as well as positron-emitting radionuclides. New detectors and acquisition geometries present challenges and opportunities for image reconstruction. In this contribution various strategies for image reconstruction for these special purpose imagers are reviewed. Iterative statistical algorithms provide a framework for reconstructing prostate images from a wide variety of detectors and acquisition geometries for PET and SPECT. The key to their success is modeling the physics of photon transport and data acquisition and the Poisson statistics of nuclear decay. Analytic image reconstruction methods can be fast and are useful for favorable acquisition geometries. Future perspectives on algorithm development and data analysis for prostate imaging are presented.

  3. Accurate Construction of Photoactivated Localization Microscopy (PALM) Images for Quantitative Measurements

    PubMed Central

    Coltharp, Carla; Kessler, Rene P.; Xiao, Jie

    2012-01-01

    Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and
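
    The clustering step described above can be illustrated with a toy greedy grouping that merges localizations closer than a distance threshold (dThresh) when their frame gap does not exceed a dark-time threshold (tThresh); this is not the authors' algorithm, and the threshold values are placeholders.

```python
import numpy as np

def group_blinking_events(locs, d_thresh=30.0, t_thresh=5):
    """Group PALM localizations likely produced by the same molecule.

    locs : array of shape (N, 3) with columns (x_nm, y_nm, frame),
           assumed sorted by frame.
    Two localizations are merged when they lie within d_thresh (nm)
    and their frame gap is at most t_thresh. Returns a cluster label
    per localization (greedy and order-dependent; illustration only).
    """
    labels = -np.ones(len(locs), dtype=int)
    clusters = []   # (last_x, last_y, last_frame) for each open cluster
    for i, (x, y, f) in enumerate(locs):
        for c, (cx, cy, cf) in enumerate(clusters):
            if f - cf <= t_thresh and np.hypot(x - cx, y - cy) <= d_thresh:
                labels[i] = c
                clusters[c] = (x, y, f)
                break
        else:
            labels[i] = len(clusters)
            clusters.append((x, y, f))
    return labels
```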

  4. Bayesian image reconstruction: Application to emission tomography

    SciTech Connect

    Nunez, J.; Llacer, J.

    1989-02-01

    In this paper we propose a maximum a posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and the likelihood to define the conditional probability. The method uses sharpness parameters which can be computed theoretically or adjusted, allowing us to obtain MAP reconstructions without the "grey" reconstructions associated with pre-Bayesian approaches. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the maximum likelihood estimate method. We have successfully applied the new method to emission tomography with both simulated and real data. 41 refs., 4 figs., 1 tab.
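
    In generic notation (ours, not necessarily the authors'), a MAP estimate with a Poisson likelihood and an entropy prior can be written as:

```latex
\hat{x} \;=\; \arg\max_{x \ge 0}\;
\sum_i \bigl[\, y_i \ln (Ax)_i - (Ax)_i \,\bigr]
\;+\; \alpha \Bigl[\, -\sum_j x_j \ln \tfrac{x_j}{m_j} \,\Bigr],
```

    where A is the system matrix, y the measured counts, m a default model, and α plays the role of a sharpness parameter balancing the likelihood and the entropy prior.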

  5. Three-dimensional image reconstruction for electrical impedance tomography.

    PubMed

    Kleinermann, F; Avis, N J; Judah, S K; Barber, D C

    1996-11-01

    Very little work has been conducted on three-dimensional aspects of electrical impedance tomography (EIT), partly due to the increased computational complexity over the two-dimensional aspects of EIT. Nevertheless, extending EIT to three-dimensional data acquisition and image reconstruction may afford significant advantages such as an increase in the size of the independent data set and improved spatial resolution. However, considerable challenges are associated with the software aspects of three-dimensional EIT systems due to the requirement for accurate three-dimensional forward problem modelling and the derivation of three-dimensional image reconstruction algorithms. This paper outlines the work performed to date to derive a three-dimensional image reconstruction algorithm for EIT based on the inversion of the sensitivity matrix approach for a finite right circular cylinder. A comparison in terms of the singular-value spectra and the singular vectors between the sensitivity matrices for a three-dimensional cylinder and a two-dimensional disc has been performed. This comparison shows that the three-dimensional image reconstruction algorithm recruits more central information at lower condition numbers than the two-dimensional image reconstruction algorithm.

  6. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  7. Weighted iterative reconstruction for magnetic particle imaging

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Rahmer, J.; Sattel, T. F.; Biederer, S.; Weizenecker, J.; Gleich, B.; Borgert, J.; Buzug, T. M.

    2010-03-01

    Magnetic particle imaging (MPI) is a new imaging technique capable of imaging the distribution of superparamagnetic particles at high spatial and temporal resolution. For the reconstruction of the particle distribution, a system of linear equations has to be solved. The mathematical solution to this linear system can be obtained using a least-squares approach. In this paper, it is shown that the quality of the least-squares solution can be improved by incorporating a weighting matrix using the reciprocal of the matrix-row energy as weights. A further benefit of this weighting is that iterative algorithms, such as the conjugate gradient method, converge rapidly yielding the same image quality as obtained by singular value decomposition in only a few iterations. Thus, the weighting strategy in combination with the conjugate gradient method improves the image quality and substantially shortens the reconstruction time. The performance of weighting strategy and reconstruction algorithms is assessed with experimental data of a 2D MPI scanner.
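
    A compact sketch of the weighting idea described above: weight each matrix row by the reciprocal of its energy and solve the resulting weighted least-squares problem with conjugate gradients. Real-valued data, the added Tikhonov term, and all parameter values are simplifying assumptions.

```python
import numpy as np

def weighted_cg_reconstruction(S, u, n_iter=10, reg=1e-3):
    """Weighted least-squares MPI-style reconstruction via conjugate gradients.

    S : (n_freq, n_pos) real-valued system matrix
    u : (n_freq,) measured signal
    Row weights are the reciprocal row energies, as suggested in the abstract;
    reg adds simple Tikhonov regularization for stability.
    Solves (S^T W S + reg I) c = S^T W u with plain CG.
    """
    w = 1.0 / np.maximum(np.sum(np.abs(S) ** 2, axis=1), 1e-12)
    A = S.T @ (w[:, None] * S) + reg * np.eye(S.shape[1])
    b = S.T @ (w * u)
    c = np.zeros(S.shape[1])
    r = b - A @ c
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        c += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return c
```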

  8. Evolutionary approach to image reconstruction from projections

    NASA Astrophysics Data System (ADS)

    Nakao, Zensho; Ali, Fathelalem F.; Takashibu, Midori; Chen, Yen-Wei

    1997-10-01

    We present an evolutionary approach for reconstructing CT images; the algorithm reconstructs unknown two-dimensional images from four one-dimensional projections. A genetic algorithm operates on a randomly generated population of strings, each of which encodes an image. Traditional as well as new genetic operators are applied in each generation. The mean square error between the projection data of the image encoded in a string and the original projection data is used to estimate the string fitness. A Laplacian constraint term is included in the fitness function of the genetic algorithm to handle smooth images. Two new modified versions of the original genetic algorithm are presented. Results obtained by the original algorithm and the modified versions are compared to those obtained by the well-known algebraic reconstruction technique (ART), and the evolutionary method was found to be more effective than ART in the particular case where the projection directions are limited to four.
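
    The fitness function described above (projection mean-square error plus a Laplacian smoothness term) might look roughly like the following; the projection operator, the weight, and the sign convention (here lower cost is better) are assumptions made for illustration.

```python
import numpy as np

def fitness(image, projections, project, smooth_weight=0.1):
    """Cost of a candidate image in a GA-based reconstruction (lower is better).

    image       : 2D candidate reconstruction decoded from a chromosome
    projections : list of measured 1D projections (four views in the paper)
    project     : callable project(image, view_index) -> simulated 1D projection
    Combines projection mean-square error with a Laplacian smoothness penalty
    (the weight is illustrative).
    """
    mse = np.mean([np.mean((project(image, k) - p) ** 2)
                   for k, p in enumerate(projections)])
    lap = (-4 * image
           + np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1))
    return mse + smooth_weight * np.mean(lap ** 2)
```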

  9. Multiresolution reconstruction method to optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Patrickeyev, Igor; Oraevsky, Alexander A.

    2003-06-01

    A new method for the reconstruction of optoacoustic images is proposed. The method incorporates multiresolution wavelet filtering into a spherical back-projection algorithm. In our method, each optoacoustic signal detected with an array of ultrawide-band transducers is decomposed into a set of self-similar wavelets of different resolution (characteristic frequency) and then back-projected along the spherical traces for each resolution scale separately. The advantage of this approach is that one can reconstruct objects of a preferred size or range of sizes. The sum of all images reconstructed at different resolutions yields an image that visualizes small and large objects at once. The speed of the proposed algorithm is of the same order as that of algorithms based on the fast Fourier transform (FFT). The accuracy of the proposed method is illustrated with images reconstructed from simulated optoacoustic signals as well as from signals measured with the Laser Optoacoustic Imaging System (LOIS) from a loop of blood vessel embedded in a gel phantom. The method can be used for contrast-enhanced optoacoustic imaging deep in tissue, i.e., for medical applications such as breast cancer or prostate cancer detection.

  10. Heuristic optimization in penumbral image for high resolution reconstructed image

    SciTech Connect

    Azuma, R.; Nozaki, S.; Fujioka, S.; Chen, Y. W.; Namihira, Y.

    2010-10-15

    Penumbral imaging is a technique which uses the fact that spatial information can be recovered from the shadow, or penumbra, that an unknown source casts through a simple large circular aperture. The size of the penumbral image on the detector can be determined mathematically from the aperture size, object size, and magnification. Conventional reconstruction methods are very sensitive to noise, whereas the heuristic reconstruction method is very tolerant of it. However, the aperture size influences the accuracy and resolution of the reconstructed image. In this article, we propose an optimization of the aperture size for neutron penumbral imaging.

  11. Computational acceleration for MR image reconstruction in partially parallel imaging.

    PubMed

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and ℓ1 (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires many fewer iterations and/or less computational cost than recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to achieve similar or even better quality of reconstructed images. PMID:20833599
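
    Schematically, and in our own notation (the authors' exact splitting is not reproduced here), the reconstruction solves a TVL1 problem of the form below, with the Barzilai-Borwein rule supplying the step size for the smooth data-fidelity part:

```latex
\min_{x}\;\; \tfrac{1}{2}\,\bigl\|\,\mathcal{F}_p S\,x - b\,\bigr\|_2^2
\;+\; \alpha\,\mathrm{TV}(x) \;+\; \beta\,\|\Psi x\|_1,
\qquad
\tau_k \;=\; \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} g_{k-1}},
```

    with s_{k-1} = x_k - x_{k-1}, g_{k-1} = ∇f(x_k) - ∇f(x_{k-1}), f the data-fidelity term, S the coil sensitivities, Ψ a sparsifying transform, and F_p the undersampled Fourier operator.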

  12. BIOFILM IMAGE RECONSTRUCTION FOR ASSESSING STRUCTURAL PARAMETERS

    PubMed Central

    Renslow, Ryan; Lewandowski, Zbigniew; Beyenal, Haluk

    2011-01-01

    The structure of biofilms can be numerically quantified from microscopy images using structural parameters. These parameters are used in biofilm image analysis to compare biofilms, to monitor temporal variation in biofilm structure, to quantify the effects of antibiotics on biofilm structure and to determine the effects of environmental conditions on biofilm structure. It is often hypothesized that biofilms with similar structural parameter values will have similar structures; however, this hypothesis has never been tested. The main goal was to test the hypothesis that the commonly used structural parameters can characterize the differences or similarities between biofilm structures. To achieve this goal 1) biofilm image reconstruction was developed as a new tool for assessing structural parameters, 2) independent reconstructions using the same starting structural parameters were tested to see how they differed from each other, 3) the effect of the original image parameter values on reconstruction success was evaluated and 4) the effect of the number and type of the parameters on reconstruction success was evaluated. It was found that two biofilms characterized by identical commonly used structural parameter values may look different, that the number and size of clusters in the original biofilm image affect image reconstruction success and that, in general, a small set of arbitrarily selected parameters may not reveal relevant differences between biofilm structures. PMID:21280029

  13. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle θ0. For field angles larger than θ0, the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement of the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366

  14. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2011-10-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to a least squares data fitting, total variation (TV) and L1 norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:21742542

  15. Efficient MR image reconstruction for compressed MR imaging.

    PubMed

    Huang, Junzhou; Zhang, Shaoting; Metaxas, Dimitris

    2010-01-01

    In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to a least squares data fitting, total variation (TV) and L1 norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems, respectively. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions of the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction. PMID:20879224

  16. Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images

    PubMed Central

    Pu, Shi; Vosselman, George

    2009-01-01

    Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenge problems throughout the collaborated reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining works of this approach are also discussed. PMID:22408539

  17. Building facade reconstruction by fusing terrestrial laser points and images.

    PubMed

    Pu, Shi; Vosselman, George

    2009-01-01

    Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenge problems throughout the collaborated reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining works of this approach are also discussed.

  18. Compressed hyperspectral image sensing with joint sparsity reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Li, Yunsong; Zhang, Jing; Song, Juan; Lv, Pei

    2011-10-01

    Recent compressed sensing (CS) results show that it is possible to accurately reconstruct images from a small number of linear measurements via convex optimization techniques. In this paper, according to the correlation analysis of linear measurements for hyperspectral images, a joint sparsity reconstruction algorithm based on interband prediction and joint optimization is proposed. In the method, linear prediction is first applied to remove the correlations among successive spectral band measurement vectors. The obtained residual measurement vectors are then recovered using the proposed joint optimization based POCS (projections onto convex sets) algorithm with the steepest descent method. In addition, a pixel-guided stopping criterion is introduced to stop the iteration. Experimental results show that the proposed algorithm exhibits its superiority over other known CS reconstruction algorithms in the literature at the same measurement rates, while with a faster convergence speed.

  19. PET Image Reconstruction Using Kernel Method

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
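
    The kernelized image model described above expresses each pixel's intensity through prior-derived features; in generic notation (a Gaussian kernel is one common choice, not necessarily the authors' exact configuration):

```latex
x \;=\; K\,\alpha, \qquad
K_{ij} \;=\; \kappa(f_i, f_j) \;=\; \exp\!\Bigl(-\tfrac{\|f_i - f_j\|^2}{2\sigma^2}\Bigr),
\qquad
\bar{y} \;=\; P\,K\,\alpha + r,
```

    where f_i is the feature vector of pixel i obtained from prior information, P the PET system matrix, r the expected randoms and scatter, and α the coefficient image estimated by maximum likelihood or penalized likelihood (e.g., a kernelized EM iteration).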

  20. Image reconstruction algorithms with wavelet filtering for optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gawali, S.; Leggio, L.; Broadway, C.; González, P.; Sánchez, M.; Rodríguez, S.; Lamela, H.

    2016-03-01

    Optoacoustic imaging (OAI) is a hybrid biomedical imaging modality based on the generation and detection of ultrasound produced by illuminating the target tissue with laser light. Typically, laser light in the visible or near-infrared spectrum is used as the excitation source. OAI relies on image reconstruction algorithms to recover the spatial distribution of optical absorption in tissue. In this work, we apply a time-domain back-projection (BP) reconstruction algorithm and wavelet filtering for point and line detection, respectively. A comparative study between point detection and integrated line detection has been carried out by evaluating their effects on the reconstructed image. Our results demonstrate that the proposed back-projection algorithm, when combined with the wavelet filtering, is efficient at reconstructing high-resolution images of absorbing spheres embedded in a non-absorbing medium.

  1. Penalized maximum-likelihood image reconstruction for lesion detection

    NASA Astrophysics Data System (ADS)

    Qi, Jinyi; Huesman, Ronald H.

    2006-08-01

    Detecting cancerous lesions is one major application of emission tomography. In this paper, we study penalized maximum-likelihood image reconstruction for this important clinical task. Compared to analytical reconstruction methods, statistical approaches can improve image quality by accurately modelling the photon detection process and the measurement noise in imaging systems. To explore the full potential of penalized maximum-likelihood image reconstruction for lesion detection, we derived simplified theoretical expressions that allow fast evaluation of the detectability of a random lesion. The theoretical results are used to design the regularization parameters to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the proposed penalty function, a conventional penalty function, and a penalty function for an isotropic point spread function. The lesion detectability is measured by a channelized Hotelling observer. The results show that the proposed penalty function outperforms the other penalty functions for lesion detection. The relative improvement depends on the size of the lesion. However, we found that the penalty function optimized for a 5 mm lesion still outperforms the other two penalty functions for detecting a 14 mm lesion. Therefore, it is feasible to use the penalty function designed for small lesions in image reconstruction, because detection of large lesions is relatively easy.
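
    For readers unfamiliar with the figure of merit used above, the following sketch computes a channelized Hotelling observer detectability index from lesion-present and lesion-absent reconstructions. The channel matrix U and the toy data are our own assumptions; the paper's specific channel set is not reproduced.

```python
import numpy as np

def cho_detectability(imgs_present, imgs_absent, U):
    """Channelized Hotelling observer detectability index.

    imgs_*: arrays of shape (n_images, n_pixels); U: channel matrix of shape
    (n_pixels, n_channels). A minimal sketch of this class of observer; the
    channel set itself (e.g. difference-of-Gaussian channels) is assumed.
    """
    v_p = imgs_present @ U          # channel outputs, lesion present
    v_a = imgs_absent @ U           # channel outputs, lesion absent
    dv = v_p.mean(axis=0) - v_a.mean(axis=0)
    S = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Toy usage with random noise images and a faint signal added to the first pixels.
rng = np.random.default_rng(0)
n_img, n_pix, n_ch = 200, 64 * 64, 10
U = rng.standard_normal((n_pix, n_ch)) / np.sqrt(n_pix)
signal = np.zeros(n_pix); signal[:50] = 0.5
absent = rng.standard_normal((n_img, n_pix))
present = absent + signal
print("d' =", cho_detectability(present, absent, U))
```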

  2. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  3. Coronary x-ray angiographic reconstruction and image orientation

    SciTech Connect

    Sprague, Kevin; Drangova, Maria; Lehmann, Glen

    2006-03-15

    We have developed an interactive geometric method for 3D reconstruction of the coronary arteries using multiple single-plane angiographic views with arbitrary orientations. Epipolar planes and epipolar lines are employed to trace corresponding vessel segments on these views. These points are utilized to reconstruct 3D vessel centerlines. The accuracy of the reconstruction is assessed using: (1) near-intersection distances of the rays that connect x-ray sources with projected points, (2) distances between traced and projected centerlines. These same two measures enter into a fitness function for a genetic search algorithm (GA) employed to orient the angiographic image planes automatically in 3D avoiding local minima in the search for optimized parameters. Furthermore, the GA utilizes traced vessel shapes (as opposed to isolated anchor points) to assist the optimization process. Differences between two-view and multiview reconstructions are evaluated. Vessel radii are measured and used to render the coronary tree in 3D as a surface. Reconstruction fidelity is demonstrated via (1) virtual phantom, (2) real phantom, and (3) patient data sets, the latter two of which utilize the GA. These simulated and measured angiograms illustrate that the vessel centerlines are reconstructed in 3D with accuracy below 1 mm. The reconstruction method is thus accurate compared to typical vessel dimensions of 1-3 mm. The methods presented should enable a combined interpretation of the severity of coronary artery stenoses and the hemodynamic impact on myocardial perfusion in patients with coronary artery disease.
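
    The first accuracy measure above, the near-intersection distance of back-projected rays, can be illustrated with a minimal two-view triangulation sketch (a generic implementation with hypothetical source positions; the GA-based orientation refinement is not shown): the 3D centerline point is taken as the midpoint of the closest points on the two rays, and the residual gap measures reconstruction consistency.

```python
import numpy as np

def triangulate_rays(s1, d1, s2, d2):
    """Reconstruct a 3D vessel-centerline point from two projection rays.

    Each ray runs from an x-ray source s through the back-projected image
    point (direction d). The closest points on the two rays are found and
    their midpoint is returned together with the near-intersection distance
    used as an accuracy measure.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for parameters t1, t2 minimizing |(s1 + t1 d1) - (s2 + t2 d2)|.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(s2 - s1) @ d1, (s2 - s1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1, p2 = s1 + t1 * d1, s2 + t2 * d2
    return 0.5 * (p1 + p2), float(np.linalg.norm(p1 - p2))

# Two hypothetical views of a point near the isocenter (units: mm).
s1, s2 = np.array([0.0, 0.0, -800.0]), np.array([800.0, 0.0, 0.0])
target = np.array([5.0, 10.0, 15.0])
point, gap = triangulate_rays(s1, target - s1, s2, target - s2)
print(point, gap)
```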

  4. Coronary x-ray angiographic reconstruction and image orientation.

    PubMed

    Sprague, Kevin; Drangova, Maria; Lehmann, Glen; Slomka, Piotr; Levin, David; Chow, Benjamin; deKemp, Robert

    2006-03-01

    We have developed an interactive geometric method for 3D reconstruction of the coronary arteries using multiple single-plane angiographic views with arbitrary orientations. Epipolar planes and epipolar lines are employed to trace corresponding vessel segments on these views. These points are utilized to reconstruct 3D vessel centerlines. The accuracy of the reconstruction is assessed using: (1) near-intersection distances of the rays that connect x-ray sources with projected points, (2) distances between traced and projected centerlines. These same two measures enter into a fitness function for a genetic search algorithm (GA) employed to orient the angiographic image planes automatically in 3D avoiding local minima in the search for optimized parameters. Furthermore, the GA utilizes traced vessel shapes (as opposed to isolated anchor points) to assist the optimization process. Differences between two-view and multiview reconstructions are evaluated. Vessel radii are measured and used to render the coronary tree in 3D as a surface. Reconstruction fidelity is demonstrated via (1) virtual phantom, (2) real phantom, and (3) patient data sets, the latter two of which utilize the GA. These simulated and measured angiograms illustrate that the vessel centerlines are reconstructed in 3D with accuracy below 1 mm. The reconstruction method is thus accurate compared to typical vessel dimensions of 1-3 mm. The methods presented should enable a combined interpretation of the severity of coronary artery stenoses and the hemodynamic impact on myocardial perfusion in patients with coronary artery disease.

  5. Analysis and accurate reconstruction of incomplete data in X-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Tan, Renbo; Chen, Liyuan

    2014-01-01

    X-ray differential phase-contrast computed tomography (DPC-CT) is a powerful physical and biochemical analysis tool. In practical applications, DPC-CT often faces the challenge of insufficient data caused by few-view sampling, bad or missing detector channels, or a limited scanning angular range. These situations occur quite frequently because of experimental constraints from the imaging hardware, the scanning geometry, and the exposure dose delivered to living specimens. In this work, we analyze the influence of incomplete data on DPC-CT image reconstruction. A reconstruction method is then developed and investigated for incomplete-data DPC-CT. It is based on an algebraic iterative reconstruction technique, which minimizes the image total variation and permits accurate tomographic imaging with less data. This work comprises a numerical study of the method and its experimental verification using a dataset measured at the W2 beamline of the storage ring DORIS III equipped with a Talbot-Lau interferometer. The numerical and experimental results demonstrate that the presented method can handle incomplete data. It will be of interest for a wide range of DPC-CT applications in medicine, biology, and nondestructive testing.
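
    The reconstruction strategy described above, algebraic iteration combined with total variation minimization, can be sketched in an ASD-POCS-like form. The snippet below is a generic illustration under assumed parameter values, not the paper's exact algorithm: an ART sweep enforces agreement with the measured projections, and a few steepest-descent steps on the image TV suppress artifacts caused by the missing data.

```python
import numpy as np

def art_tv(A, y, shape, n_outer=20, n_tv=10, relax=0.5, tv_step=0.02):
    """Alternating ART / TV-minimization sketch for incomplete-data CT.

    One ART (Kaczmarz) sweep enforces consistency with the projections
    y = A x; a few steepest-descent steps on the image total variation then
    suppress streaks. All parameter values are illustrative assumptions.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1) + 1e-12
    for _ in range(n_outer):
        for i in range(A.shape[0]):                       # ART sweep
            x += relax * (y[i] - A[i] @ x) / row_norms[i] * A[i]
        img = x.reshape(shape)
        for _ in range(n_tv):                             # TV descent steps
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
            div = (np.diff(gx / mag, axis=0, prepend=0 * img[:1, :]) +
                   np.diff(gy / mag, axis=1, prepend=0 * img[:, :1]))
            img = img + tv_step * div                     # descend on TV
        x = img.ravel()
    return x.reshape(shape)

# Tiny toy usage: 8x8 image, heavily undersampled random projections.
rng = np.random.default_rng(0)
x_true = np.zeros((8, 8)); x_true[2:6, 2:6] = 1.0
A = rng.standard_normal((20, 64))
rec = art_tv(A, A @ x_true.ravel(), (8, 8))
```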

  6. Joint model of motion and anatomy for PET image reconstruction

    SciTech Connect

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-12-15

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem.

  7. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom, when reconstructed from 128 rebinned projections using a conventional CS method, had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  8. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom, when reconstructed from 128 rebinned projections using a conventional CS method, had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  9. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778

  10. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831

  11. Image reconstructions with the rotating RF coil

    NASA Astrophysics Data System (ADS)

    Trakic, A.; Wang, H.; Weber, E.; Li, B. K.; Poole, M.; Liu, F.; Crozier, S.

    2009-12-01

    Recent studies have shown that rotating a single RF transceive coil (RRFC) provides a uniform coverage of the object and brings a number of hardware advantages (i.e. requires only one RF channel, averts coil-coil coupling interactions and facilitates large-scale multi-nuclear imaging). Motion of the RF coil sensitivity profile however violates the standard Fourier Transform definition of a time-invariant signal, and the images reconstructed in this conventional manner can be degraded by ghosting artifacts. To overcome this problem, this paper presents Time Division Multiplexed — Sensitivity Encoding (TDM-SENSE), as a new image reconstruction scheme that exploits the rotation of the RF coil sensitivity profile to facilitate ghost-free image reconstructions and reductions in image acquisition time. A transceive RRFC system for head imaging at 2 Tesla was constructed and applied in a number of in vivo experiments. In this initial study, alias-free head images were obtained in half the usual scan time. It is hoped that new sequences and methods will be developed by taking advantage of coil motion.

  12. Iterative image reconstruction in spectral CT

    NASA Astrophysics Data System (ADS)

    Hernandez, Daniel; Michel, Eric; Kim, Hye S.; Kim, Jae G.; Han, Byung H.; Cho, Min H.; Lee, Soo Y.

    2012-03-01

    The scan time of spectral-CTs is much longer than that of conventional CTs due to the limited number of x-ray photons detectable by photon-counting detectors. However, the spectral pixel data in spectral-CT carry much richer information on the physiological and pathological status of the tissues than the CT number in conventional CT, which makes spectral-CT one of the promising future imaging modalities. One simple way to reduce the scan time in spectral-CT imaging is to reduce the number of views in the acquisition of projection data. However, this may result in poorer SNR and strong streak artifacts, which can severely compromise image quality. In this work, spectral-CT projection data were obtained from a lab-built spectral-CT consisting of a single CdTe photon-counting detector, a micro-focus x-ray tube, and scan mechanics. For the image reconstruction, we used two iterative image reconstruction methods, the simultaneous iterative reconstruction technique (SIRT) and total variation minimization based on the conjugate gradient method (CG-TV), along with filtered back-projection (FBP), to compare image quality. From imaging of the iodine-containing phantoms, we observed that SIRT and CG-TV are superior to the FBP method in terms of SNR and streak artifacts.
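
    Of the two iterative methods compared above, SIRT has a particularly compact textbook form, sketched below for reference (a generic dense-matrix implementation, not the authors' code): each iteration back-projects the row-normalized projection residual and scales it by inverse column sums.

```python
import numpy as np

def sirt(A, y, n_iters=100):
    """Simultaneous iterative reconstruction technique (SIRT) sketch.

    Update: x <- x + C A^T R (y - A x), where R and C hold inverse row and
    column sums of the system matrix A. Shown only to illustrate the class
    of method compared against FBP in the study above.
    """
    row_sums = np.abs(A).sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = np.abs(A).sum(axis=0); col_sums[col_sums == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x += (A.T @ ((y - A @ x) / row_sums)) / col_sums
    return x
```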

  13. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.

  14. Optimizing modelling in iterative image reconstruction for preclinical pinhole PET

    NASA Astrophysics Data System (ADS)

    Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.

    2016-05-01

    The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.

  15. Optimizing modelling in iterative image reconstruction for preclinical pinhole PET.

    PubMed

    Goorden, Marlies C; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J

    2016-05-21

    The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning (99m)Tc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes ('multiple-pinhole paths' (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging (18)F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport. PMID:27082049

  16. Stochastic image reconstruction for a dual-particle imaging system

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.

    2016-02-01

    Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.

  17. Integrated Image Reconstruction and Gradient Nonlinearity Correction

    PubMed Central

    Tao, Shengzhen; Trzasko, Joshua D.; Shu, Yunhong; Huston, John; Bernstein, Matt A.

    2014-01-01

    Purpose To describe a model-based reconstruction strategy for routine magnetic resonance imaging (MRI) that accounts for gradient nonlinearity (GNL) during rather than after transformation to the image domain, and demonstrate that this approach reduces the spatial resolution loss that occurs during strictly image-domain GNL-correction. Methods After reviewing conventional GNL-correction methods, we propose a generic signal model for GNL-affected MRI acquisitions, discuss how it incorporates into contemporary image reconstruction platforms, and describe efficient non-uniform fast Fourier transform (NUFFT)-based computational routines for these. The impact of GNL-correction on spatial resolution by the conventional and proposed approaches is investigated on phantom data acquired at varying offsets from gradient isocenter, as well as on fully-sampled and (retrospectively) undersampled in vivo acquisitions. Results Phantom results demonstrate that resolution loss that occurs during GNL-correction is significantly less for the proposed strategy than for the standard approach at distances >10 cm from isocenter with a 35 cm FOV gradient coil. The in vivo results suggest that the proposed strategy better preserves fine anatomical detail than retrospective GNL-correction while offering comparable geometric correction. Conclusion Accounting for GNL during image reconstruction allows geometric distortion to be corrected with less spatial resolution loss than is typically observed with the conventional image domain correction strategy. PMID:25298258

  18. Optimal Discretization Resolution in Algebraic Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Sharif, Behzad; Kamalabadi, Farzad

    2005-11-01

    In this paper, we focus on data-limited tomographic imaging problems where the underlying linear inverse problem is ill-posed. A typical regularized reconstruction algorithm uses an algebraic formulation with a predetermined discretization resolution. If the selected resolution is too low, we may lose useful details of the underlying image; if it is too high, the reconstruction will be unstable and the representation will fit irrelevant features. In this work, two approaches are introduced to address this issue. The first approach uses Mallows' CL method or generalized cross-validation. For each of the two methods, a joint estimator of the regularization parameter and discretization resolution is proposed, and the asymptotic optimality of these estimators is investigated. The second approach is a Bayesian estimator of the model order using a complexity-penalizing prior. Numerical experiments focus on a space imaging application with a set of limited-angle tomographic observations.

  19. Mirror Surface Reconstruction from a Single Image.

    PubMed

    Liu, Miaomiao; Hartley, Richard; Salzmann, Mathieu

    2015-04-01

    This paper tackles the problem of reconstructing the shape of a smooth mirror surface from a single image. In particular, we consider the case where the camera is observing the reflection of a static reference target in the unknown mirror. We first study the reconstruction problem given dense correspondences between 3D points on the reference target and image locations. In such conditions, our differential geometry analysis provides a theoretical proof that the shape of the mirror surface can be recovered if the pose of the reference target is known. We then relax our assumptions by considering the case where only sparse correspondences are available. In this scenario, we formulate reconstruction as an optimization problem, which can be solved using a nonlinear least-squares method. We demonstrate the effectiveness of our method on both synthetic and real images. We then provide a theoretical analysis of the potential degenerate cases with and without prior knowledge of the pose of the reference target. Finally, we show that our theory can be similarly applied to the reconstruction of the surfaces of transparent objects.

  20. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology in which imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case where the system matrix H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
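
    The notion of a thresholding rule that interpolates between soft and hard thresholding can be illustrated with the firm-thresholding function below. This is only meant to convey the idea; it is not necessarily the paper's hybrid rule, whose exact form follows from the LAZE prior and its hyperparameters. In an iterative thresholding framework, such a rule would be applied to the coefficients after each data-consistency (Landweber-style) step.

```python
import numpy as np

def soft(w, t):
    """Soft threshold: shrink toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard(w, t):
    """Hard threshold: keep coefficients above t unchanged."""
    return np.where(np.abs(w) > t, w, 0.0)

def firm(w, t1, t2):
    """Firm threshold (Gao & Bruce): interpolates between soft (t2 -> t1)
    and hard (t2 -> inf) thresholding. Shown only to illustrate a rule that
    generalizes both; it is not necessarily the paper's hybrid rule.
    """
    a = np.abs(w)
    return np.where(a <= t1, 0.0,
           np.where(a <= t2, np.sign(w) * t2 * (a - t1) / (t2 - t1), w))

w = np.linspace(-3, 3, 7)
print(soft(w, 1.0))
print(hard(w, 1.0))
print(firm(w, 1.0, 2.0))
```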

  1. Speckle image reconstruction of the adaptive optics solar images.

    PubMed

    Zhong, Libo; Tian, Yu; Rao, Changhui

    2014-11-17

    Speckle image reconstruction, in which the speckle transfer function (STF) is modeled as an annular distribution according to the angular dependence of adaptive optics (AO) compensation and the individual STF in each annulus is obtained from the corresponding Fried parameter calculated with the traditional spectral ratio method, is used in this paper to restore solar images corrected by an AO system. Reconstructions of solar images acquired by a 37-element AO system validate this method, and the image quality is improved evidently. Moreover, we found that the photometric accuracy of the reconstruction is field dependent due to the influence of the AO correction. With increasing angular separation of the object from the AO lockpoint, the relative improvement grows and tends toward a constant level in regions far from the central field of view. Simulation results show that this phenomenon is mainly due to the angle-dependent disparity between the calculated STF and the real AO STF.

  2. Improved least squares MR image reconstruction using estimates of k-space data consistency.

    PubMed

    Johnson, Kevin M; Block, Walter F; Reeder, Scott B; Samsonov, Alexey

    2012-06-01

    This study describes a new approach to reconstruct data that has been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least squares-based image estimation techniques including parallel imaging and constrained reconstruction/compressed sensing applications. Performance of the several variants of the algorithm tailored for fast spin echo and self-gated respiratory gating applications was evaluated in simulations, phantom experiments, and in vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches.
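
    The weighted least-squares formulation at the heart of this approach can be written as min_x ||W^(1/2)(Ex - d)||^2, where the diagonal weights encode the estimated consistency of each k-space sample. The sketch below solves this problem directly for a small dense example; the weight values are placeholders, since the paper's contribution, estimating the weights from the data itself, is not reproduced here.

```python
import numpy as np

def weighted_ls_recon(E, d, w):
    """Consistency-weighted least-squares reconstruction sketch.

    Solves min_x || W^(1/2) (E x - d) ||^2, where the diagonal weights w
    down-weight k-space samples judged inconsistent (e.g. corrupted by
    unfavorable magnetization evolution). The weights here are assumed,
    not estimated from the data as in the paper.
    """
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(E * sw[:, None], sw * d, rcond=None)
    return x

# Toy usage: a random complex encoding matrix with a few corrupted samples.
rng = np.random.default_rng(0)
E = rng.standard_normal((100, 30)) + 1j * rng.standard_normal((100, 30))
x_true = rng.standard_normal(30)
d = E @ x_true
d[:10] += 5.0                      # simulate inconsistent samples
w = np.ones(100); w[:10] = 0.05    # down-weight the inconsistent data
x_hat = weighted_ls_recon(E, d, w)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```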

  3. Improved Least Squares MR Image Reconstruction Using Estimates of k-Space Data Consistency

    PubMed Central

    Johnson, Kevin M.; Block, Walter F.; Reeder, Scott. B.; Samsonov, Alexey

    2011-01-01

    This work describes a new approach to reconstruct data that has been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least-squares based image estimation techniques including parallel imaging and constrained reconstruction/compressed sensing applications. Performance of the several variants of the algorithm tailored for fast spin echo (FSE) and self gated respiratory gating applications was evaluated in simulations, phantom experiments, and in-vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches. PMID:22135155

  4. A fast and accurate method for echocardiography strain rate imaging

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Recently, strain and strain rate imaging have proved their superiority with respect to classical motion estimation methods in myocardial evaluation as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. In addition, the cardiac central point is obtained using a combination of center-of-mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique to handle different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon), and real sequences, and shows that this technique is more accurate and faster than previous methods.

  5. Propagation phasor approach for holographic image reconstruction

    PubMed Central

    Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan

    2016-01-01

    To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears. PMID:26964671

  6. Propagation phasor approach for holographic image reconstruction

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan

    2016-03-01

    To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears.

  7. Context dependent anti-aliasing image reconstruction

    NASA Technical Reports Server (NTRS)

    Beaudet, Paul R.; Hunt, A.; Arlia, N.

    1989-01-01

    Image reconstruction has mostly been confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing, context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower-resolution image array data are obtained by simulation. A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made to use a quality model that is now finding application in data compression.

  8. A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures.

    PubMed

    Mangipudi, K R; Radisch, V; Holzer, L; Volkert, C A

    2016-04-01

    We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method, which is based on actual slice thicknesses, is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using the traditional FIB-tomography method employing a constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. PMID:26906523
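
    The key geometric idea, recovering each slice thickness from the change in sample width between successive cross-sections of a wedge, reduces to simple trigonometry. The sketch below assumes the cross-section width grows linearly with the milled depth at a rate set by a known wedge angle; the exact geometric factor depends on how that angle is defined in the experiment, so this relation is an illustrative assumption rather than the authors' calibration.

```python
import numpy as np

def slice_thicknesses(widths_um, wedge_angle_deg):
    """Per-slice thickness from the change in sample width between
    successive cross-sectional images of a wedge-shaped sample.

    Assumes thickness_i = (width_i - width_{i-1}) / tan(wedge_angle), i.e.
    that the exposed width grows linearly with milled depth. The geometric
    factor is an assumption of this sketch.
    """
    widths = np.asarray(widths_um, dtype=float)
    return np.diff(widths) / np.tan(np.radians(wedge_angle_deg))

# Example: widths (in um) measured on successive images, 30 degree wedge.
print(slice_thicknesses([10.000, 10.006, 10.011, 10.017], 30.0))
```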

  9. Three-dimensional volumetric object reconstruction using computational integral imaging.

    PubMed

    Hong, Seung-Hyun; Jang, Ju-Seog; Javidi, Bahram

    2004-02-01

    We propose a three-dimensional (3D) imaging technique that can sense a 3D scene and computationally reconstruct it as a 3D volumetric image. Sensing of the 3D scene is carried out by obtaining elemental images optically using a pickup microlens array and a detector array. Reconstruction of the volume pixels of the scene is accomplished by computationally simulating optical reconstruction according to ray optics. All pixels of the recorded elemental images contribute to the volumetric reconstruction of the 3D scene. Image display planes at arbitrary distances from the display microlens array are computed and reconstructed by back-propagating the elemental images through a computer-synthesized pinhole array based on ray optics. We present experimental results of 3D image sensing and volume pixel reconstruction to test and verify the performance of the algorithm and the imaging system. The volume pixel values can be used for 3D image surface reconstruction.
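
    A minimal shift-and-average version of this kind of computational reconstruction is sketched below: each elemental image is displaced in proportion to its pinhole index and the gap-to-distance ratio, and overlapping contributions are averaged to form one depth plane. The magnification convention and parameter names are our assumptions, not the authors' exact formulation.

```python
import numpy as np

def ciir_plane(elementals, pitch_px, g, z):
    """Computational integral-imaging reconstruction of one depth plane by
    back-projecting elemental images through a synthetic pinhole array.

    Shift-and-average sketch: the elemental image behind pinhole (i, j) is
    displaced by (i, j) * pitch_px * g / z pixels and overlapping
    contributions are averaged. The g/z magnification convention is an
    assumption. elementals: array of shape (rows, cols, h, w).
    """
    rows, cols, h, w = elementals.shape
    shift = pitch_px * g / z
    out_h = h + int(np.ceil(shift * (rows - 1)))
    out_w = w + int(np.ceil(shift * (cols - 1)))
    acc = np.zeros((out_h, out_w))
    cnt = np.zeros((out_h, out_w))
    for i in range(rows):
        for j in range(cols):
            y0, x0 = int(round(i * shift)), int(round(j * shift))
            acc[y0:y0 + h, x0:x0 + w] += elementals[i, j]
            cnt[y0:y0 + h, x0:x0 + w] += 1.0
    return acc / np.maximum(cnt, 1.0)

# Toy usage: 3x3 elemental images of 32x32 pixels.
ei = np.random.default_rng(0).random((3, 3, 32, 32))
plane = ciir_plane(ei, pitch_px=10.0, g=3.0, z=60.0)
print(plane.shape)
```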

  10. Reconstructing accurate ToF-SIMS depth profiles for organic materials with differential sputter rates

    PubMed Central

    Taylor, Adam J.; Graham, Daniel J.; Castner, David G.

    2015-01-01

    To properly process and reconstruct 3D ToF-SIMS data from systems such as multi-component polymers, drug delivery scaffolds, cells, and tissues, it is important to understand the sputtering behavior of the sample. Modern cluster sources enable efficient and stable sputtering of many organic materials. However, not all materials sputter at the same rate, and few studies have explored how different sputter rates may distort reconstructed depth profiles of multicomponent materials. In this study, spun-cast bilayer polymer films of polystyrene and PMMA are used as model systems to optimize methods for the reconstruction of depth profiles in systems exhibiting different sputter rates between components. Transforming the bilayer depth profile from sputter time to depth using a single sputter rate fails to account for sputter rate variations during the profile. This leads to inaccurate apparent layer thicknesses and interfacial positions, as well as the appearance of continued sputtering into the substrate. Applying measured single-component sputter rates to the bilayer films, with a step change in sputter rate at the interfaces, yields more accurate film thicknesses and interface positions. The transformation can be further improved by applying a linear sputter rate transition across the interface, thus modeling the sputter rate changes seen in polymer blends. This more closely reflects the expected sputtering behavior. This study highlights the need for both accurate evaluation of component sputter rates and careful conversion of sputter time to depth if accurate 3D reconstructions of complex multi-component organic and biological samples are to be achieved. The effects of errors in sputter rate determination are also explored. PMID:26185799
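
    The sputter-time-to-depth conversion with a linear rate transition across the interface amounts to integrating a piecewise rate profile over time. The sketch below (with placeholder rates and times) illustrates this; in practice the rates would come from single-component calibration films as described above.

```python
import numpy as np

def sputter_time_to_depth(t, rate_a, rate_b, t_interface, t_blend):
    """Convert ToF-SIMS sputter time to depth for a two-layer film whose
    components sputter at different rates.

    The sputter rate is rate_a in the top layer and rate_b in the bottom
    layer, with a linear transition of duration t_blend centred on the
    interface time t_interface (mirroring the linear-transition model
    described above). Depth is the cumulative integral of the rate.
    All parameter names and the toy values below are illustrative.
    """
    t = np.asarray(t, dtype=float)
    lo = t_interface - t_blend / 2.0
    frac = np.clip((t - lo) / max(t_blend, 1e-12), 0.0, 1.0)
    rate = (1.0 - frac) * rate_a + frac * rate_b      # nm per second
    # Cumulative trapezoid integration of rate over time gives depth in nm.
    depth = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))))
    return depth

t = np.linspace(0, 600, 601)                     # sputter time, s
depth = sputter_time_to_depth(t, rate_a=0.30, rate_b=0.55,
                              t_interface=300, t_blend=60)
print(depth[300], depth[-1])                     # depth at interface and at end
```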

  11. Near Real-Time Solar Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, G.; Denker, C.; Wang, H.

    2001-05-01

    We use a Linux Beowulf cluster to build a system for near real-time solar image reconstruction, with the goal of obtaining diffraction-limited solar images at a cadence of one minute. This gives us immediate access to high-level data products and enables direct visualization of dynamic processes on the Sun. Space weather warnings and flare forecasting will benefit from this project. The image processing algorithms are based on the speckle masking method combined with frame selection. The parallel programs use explicit message passing via the Parallel Virtual Machine (PVM). The preliminary results are very promising: we can now construct a 256 by 256 pixel image out of 50 short-exposure images within one minute on a Beowulf cluster with four 500 MHz CPUs. In addition, we want to explore the possibility of applying parallel computing on Beowulf clusters to other complex data reduction and analysis problems that we encounter, e.g., in multi-dimensional spectro-polarimetry.

  12. Performance-based assessment of reconstructed images

    SciTech Connect

    Hanson, Kenneth

    2009-01-01

    During the early 90s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection task and a new, challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.

  13. Hyperspectral image reconstruction for diffuse optical tomography

    PubMed Central

    Larusson, Fridrik; Fantini, Sergio; Miller, Eric L.

    2011-01-01

    We explore the development and performance of algorithms for hyperspectral diffuse optical tomography (DOT) for which data from hundreds of wavelengths are collected and used to determine the concentration distribution of chromophores in the medium under investigation. An efficient method is detailed for forming the images using iterative algorithms applied to a linearized Born approximation model assuming the scattering coefficient is spatially constant and known. The L-surface framework is employed to select optimal regularization parameters for the inverse problem. We report image reconstructions using 126 wavelengths with estimation error in simulations as low as 0.05 and mean square error of experimental data of 0.18 and 0.29 for ink and dye concentrations, respectively, an improvement over reconstructions using fewer specifically chosen wavelengths. PMID:21483616

  14. Deep Reconstruction Models for Image Set Classification.

    PubMed

    Hayat, Munawar; Bennamoun, Mohammed; An, Senjian

    2015-04-01

    Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks, and personal albums. Compared with single-image-based classification, it offers more promise and has therefore attracted significant research attention in recent years. Unlike many existing methods, which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework that makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class, and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms existing state-of-the-art methods. PMID:26353289

  15. Joint reconstruction of absorption and refractive properties in propagation-based x-ray phase-contrast tomography via a non-linear image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Wang, Kun; Gursoy, Doga; Soriano, Carmen; De Carlo, Francesco; Anastasio, Mark A.

    2016-03-01

    Propagation-based X-ray phase-contrast tomography (XPCT) provides the opportunity to image weakly absorbing objects and is being explored actively for a variety of important pre-clinical applications. Quantitative XPCT image reconstruction methods typically involve a phase retrieval step followed by application of an image reconstruction algorithm. Most approaches to phase retrieval require either acquiring multiple images at different object-to-detector distances or introducing simplifying assumptions, such as a single-material assumption, to linearize the imaging model. In order to overcome these limitations, a non-linear image reconstruction method has been proposed previously that jointly estimates the absorption and refractive properties of an object from XPCT projection data acquired at a single propagation distance, without the need to linearize the imaging model. However, the numerical properties of the associated non-convex optimization problem remain largely unexplored. In this study, computer simulations are conducted to investigate the feasibility of the joint reconstruction problem in practice. We demonstrate that the joint reconstruction problem is ill-posed and sensitive to system inconsistencies. Particularly, the method can generate accurate refractive index images only if the object is thin and has no phase-wrapping in the data. However, we also observed that, for weakly absorbing objects, the refractive index images reconstructed by the joint reconstruction method are, in general, more accurate than those reconstructed using methods that simply ignore the object's absorption.

  16. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, image quality depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares with an l1 regularization norm. The method is tested by reconstructing an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT, and regularized least squares (RLS). The method demonstrates a significant resolution improvement over B-scan, of about 91% using real data. The proposed scheme also outperforms the traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700

  17. Particle Image Velocimetry Measurements in an Anatomically-Accurate Scaled Model of the Mammalian Nasal Cavity

    NASA Astrophysics Data System (ADS)

    Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent

    2013-11-01

    The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.

  18. Scattering robust 3D reconstruction via polarized transient imaging.

    PubMed

    Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai

    2016-09-01

    Reconstructing 3D structure of scenes in the scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile largely deviates from the true value. To handle this problem, we used the different polarization behaviors of the reflection and scattering components, and introduced active polarization to separate the reflection component to estimate the scattering robust depth. Our experiments have demonstrated that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944

  19. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  20. High resolution reconstruction of solar prominence images observed by the New Vacuum Solar Telescope

    NASA Astrophysics Data System (ADS)

    Xiang, Yong-yuan; Liu, Zhong; Jin, Zhen-yu

    2016-11-01

    A high resolution image showing fine structures is crucial for understanding the nature of solar prominences. In this paper, high resolution imaging of solar prominences with the New Vacuum Solar Telescope (NVST) using speckle masking is introduced. Each step of the data reduction, especially the image alignment, is discussed. Accurate alignment of all frames and the non-isoplanatic calibration of each image are the keys to a successful reconstruction. Reconstructed high resolution images from NVST also indicate that under normal seeing conditions, it is feasible to carry out high resolution observations of solar prominences with a ground-based solar telescope, even in the absence of adaptive optics.

  1. An Improved Total Variation Minimization Method Using Prior Images and Split-Bregman Method in CT Reconstruction

    PubMed Central

    2016-01-01

    Compressive Sensing (CS) theory has great potential for reconstructing Computed Tomography (CT) images from sparse-view projection data, and the Total Variation- (TV-) based CT reconstruction method is very popular. However, it does not directly incorporate prior images into the reconstruction. To improve the quality of reconstructed images, this paper proposes an improved TV minimization method using prior images and the Split-Bregman method in CT reconstruction, which uses prior images to obtain valuable previous information and promote the subsequent imaging process. The images obtained asynchronously were registered via Locally Linear Embedding (LLE). To validate the method, two studies were performed. Numerical simulation using an abdomen phantom has been used to demonstrate that the proposed method enables accurate reconstruction of image objects under sparse projection data. A real dataset was used to further validate the method. PMID:27689076

  2. An Improved Total Variation Minimization Method Using Prior Images and Split-Bregman Method in CT Reconstruction

    PubMed Central

    2016-01-01

    Compressive Sensing (CS) theory has great potential for reconstructing Computed Tomography (CT) images from sparse-view projection data, and the Total Variation- (TV-) based CT reconstruction method is very popular. However, it does not directly incorporate prior images into the reconstruction. To improve the quality of reconstructed images, this paper proposes an improved TV minimization method using prior images and the Split-Bregman method in CT reconstruction, which uses prior images to obtain valuable previous information and promote the subsequent imaging process. The images obtained asynchronously were registered via Locally Linear Embedding (LLE). To validate the method, two studies were performed. Numerical simulation using an abdomen phantom has been used to demonstrate that the proposed method enables accurate reconstruction of image objects under sparse projection data. A real dataset was used to further validate the method.

  3. Neural portraits of perception: reconstructing face images from evoked brain activity.

    PubMed

    Cowen, Alan S; Chun, Marvin M; Kuhl, Brice A

    2014-07-01

    Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity. While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex. However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions. Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network. Thus, we investigated (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and (b) whether this could be achieved even when excluding activity within occipital cortex. Our approach involved four steps. (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces. (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces. (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores. (4) Finally, these scores were transformed into reconstructed images. Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex. This methodology not only represents a novel and promising approach for investigating face perception, but also suggests avenues for reconstructing 'offline' visual experiences, including dreams, memories, and imagination, which are chiefly represented in higher-level cortical areas.
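
    The four-step pipeline above can be sketched compactly. The snippet below is an illustration only: it uses scikit-learn's PCA and substitutes ridge regression for the unspecified machine learning algorithm, and the face and fMRI arrays are random stand-ins for real data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Hypothetical data: rows are trials.
rng = np.random.default_rng(1)
faces_train = rng.random((80, 32 * 32))   # flattened training face images
bold_train = rng.random((80, 500))        # fMRI patterns for the same trials
bold_test = rng.random((10, 500))         # activity evoked by unseen faces

# (1) PCA basis for the training faces.
pca = PCA(n_components=20).fit(faces_train)
scores_train = pca.transform(faces_train)

# (2) Map brain activity to component scores (ridge regression stands in
#     for the paper's machine learning algorithm).
reg = Ridge(alpha=1.0).fit(bold_train, scores_train)

# (3) Predict component scores from the test activity.
scores_test = reg.predict(bold_test)

# (4) Transform predicted scores back into reconstructed face images.
faces_recon = pca.inverse_transform(scores_test).reshape(-1, 32, 32)
```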

  4. A Quadratic Spline based Interface (QUASI) reconstruction algorithm for accurate tracking of two-phase flows

    NASA Astrophysics Data System (ADS)

    Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.

    2009-12-01

    A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently compared with the other interface reconstruction methods available in the literature. A quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat does the rate of convergence become linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
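
    The continuity idea behind QUASI, i.e. parabolic segments joined with matching values (C0) and slopes (C1), can be illustrated with a minimal quadratic-spline sketch. This is not the interface reconstruction algorithm itself, which operates on volume fractions in mixed cells; it only shows how C1 continuity propagates a slope from one parabola to the next, starting from an assumed initial slope d0.

```python
import numpy as np

def quadratic_spline(x, y, d0=0.0):
    """Piecewise parabolas through the knots (x, y) with C0/C1 continuity.

    Each segment is a*(t - x[i])**2 + b*(t - x[i]) + c; matching the values at
    both knots and the slope at the left knot fixes (a, b, c), and the slope
    produced at the right knot is carried into the next segment (C1).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    coeffs, d = [], d0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        c, b = y[i], d
        a = (y[i + 1] - c - b * h) / h ** 2
        coeffs.append((a, b, c))
        d = 2 * a * h + b                      # slope at the shared knot
    return coeffs

def evaluate(coeffs, x, t):
    """Evaluate the spline at points t."""
    x, t = np.asarray(x, float), np.atleast_1d(np.asarray(t, float))
    idx = np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(coeffs) - 1)
    out = np.empty_like(t)
    for k, (i, ti) in enumerate(zip(idx, t)):
        a, b, c = coeffs[i]
        dt = ti - x[i]
        out[k] = a * dt ** 2 + b * dt + c
    return out
```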

  5. On the Use of Uavs in Mining and Archaeology - Geo-Accurate 3d Reconstructions Using Various Platforms and Terrestrial Views

    NASA Astrophysics Data System (ADS)

    Tscharf, A.; Rumpler, M.; Fraundorfer, F.; Mayer, G.; Bischof, H.

    2015-08-01

    During the last decades photogrammetric computer vision systems have been well established in scientific and commercial applications. Especially the increasing affordability of unmanned aerial vehicles (UAVs) in conjunction with automated multi-view processing pipelines has resulted in an easy way of acquiring spatial data and creating realistic and accurate 3D models. With the use of multicopter UAVs, it is possible to record highly overlapping images, from almost terrestrial camera positions to oblique and nadir aerial images, due to the ability to navigate slowly, hover and capture images at nearly any possible position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to enable easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly in the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three different case studies for applications in mining and archaeology and present several accuracy related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different view points and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well as inside views of an object by joint image processing to

  6. Calibrating X-ray Imaging Devices for Accurate Intensity Measurement

    SciTech Connect

    Haugh, M. J.

    2011-07-28

    The purpose of the project presented is to develop methods to accurately calibrate X-ray imaging devices. The approach was to develop X-ray source systems suitable for this endeavor and to develop methods to calibrate solid state detectors to measure source intensity. NSTec X-ray sources used for the absolute calibration of cameras are described, as well as the method of calibrating the source by calibrating the detectors. The work resulted in calibration measurements for several types of X-ray cameras. X-ray camera calibration measured efficiency and efficiency variation over the CCD. Camera types calibrated include: CCD, CID, back thinned (back illuminated), front illuminated.

  7. Prior image constrained image reconstruction in emerging computed tomography applications

    NASA Astrophysics Data System (ADS)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation

  8. Rec-DCM-Eigen: Reconstructing a Less Parsimonious but More Accurate Tree in Shorter Time

    PubMed Central

    Kang, Seunghwa; Tang, Jijun; Schaeffer, Stephen W.; Bader, David A.

    2011-01-01

    Maximum parsimony (MP) methods aim to reconstruct the phylogeny of extant species by finding the most parsimonious evolutionary scenario using the species' genome data. MP methods are considered to be accurate, but they are also computationally expensive especially for a large number of species. Several disk-covering methods (DCMs), which decompose the input species to multiple overlapping subgroups (or disks), have been proposed to solve the problem in a divide-and-conquer way. We design a new DCM based on the spectral method and also develop the COGNAC (Comparing Orders of Genes using Novel Algorithms and high-performance Computers) software package. COGNAC uses the new DCM to reduce the phylogenetic tree search space and selects an output tree from the reduced search space based on the MP principle. We test the new DCM using gene order data and inversion distance. The new DCM not only reduces the number of candidate tree topologies but also excludes erroneous tree topologies which can be selected by original MP methods. Initial labeling of internal genomes affects the accuracy of MP methods using gene order data, and the new DCM enables more accurate initial labeling as well. COGNAC demonstrates superior accuracy as a consequence. We compare COGNAC with FastME and the combination of the state-of-the-art DCM (Rec-I-DCM3) and GRAPPA. COGNAC clearly outperforms FastME in accuracy. COGNAC, using the new DCM, also reconstructs a much more accurate tree in significantly shorter time than GRAPPA with Rec-I-DCM3. PMID:21887219

  9. Rec-DCM-Eigen: reconstructing a less parsimonious but more accurate tree in shorter time.

    PubMed

    Kang, Seunghwa; Tang, Jijun; Schaeffer, Stephen W; Bader, David A

    2011-01-01

    Maximum parsimony (MP) methods aim to reconstruct the phylogeny of extant species by finding the most parsimonious evolutionary scenario using the species' genome data. MP methods are considered to be accurate, but they are also computationally expensive especially for a large number of species. Several disk-covering methods (DCMs), which decompose the input species to multiple overlapping subgroups (or disks), have been proposed to solve the problem in a divide-and-conquer way. We design a new DCM based on the spectral method and also develop the COGNAC (Comparing Orders of Genes using Novel Algorithms and high-performance Computers) software package. COGNAC uses the new DCM to reduce the phylogenetic tree search space and selects an output tree from the reduced search space based on the MP principle. We test the new DCM using gene order data and inversion distance. The new DCM not only reduces the number of candidate tree topologies but also excludes erroneous tree topologies which can be selected by original MP methods. Initial labeling of internal genomes affects the accuracy of MP methods using gene order data, and the new DCM enables more accurate initial labeling as well. COGNAC demonstrates superior accuracy as a consequence. We compare COGNAC with FastME and the combination of the state-of-the-art DCM (Rec-I-DCM3) and GRAPPA. COGNAC clearly outperforms FastME in accuracy. COGNAC, using the new DCM, also reconstructs a much more accurate tree in significantly shorter time than GRAPPA with Rec-I-DCM3.

  10. Effects of scatter radiation on reconstructed images in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Liu, Bob; Li, Xinhua

    2009-02-01

    We evaluated the effects of scatter radiation on the reconstructed images in digital breast tomosynthesis. Projection images of a 6 cm anthropomorphic breast phantom were acquired using a Hologic prototype digital breast tomosynthesis system. Scatter intensities in projection images were sampled with a beam stop method. The scatter intensity at any pixel was obtained by two-dimensional fitting. Primary-only projection images were generated by subtracting the scatter contributions from the original projection images. The 3-dimensional breast was reconstructed first based on the original projection images, which contained the contributions from both primary rays and scattered radiation, using three different reconstruction algorithms. The same breast volume was reconstructed again using the same algorithms but based on the primary-only projection images. The image artifacts, pixel value difference to noise ratio (PDNR), and detected image features in these two sets of reconstructed slices were compared to evaluate the effects of scatter radiation. It was found that the scatter radiation caused inaccurate reconstruction of the x-ray attenuation property of the tissue. X-ray attenuation coefficients could be significantly underestimated in the region where scatter intensity is high. This phenomenon is similar to the cupping artifacts found in computed tomography. The scatter correction is important if accurate x-ray attenuation of the tissues is needed. No significant improvement in terms of numbers of detected image features was observed after scatter correction. A more sophisticated phantom dedicated to digital breast tomosynthesis may be needed for further evaluation.

  11. SU-E-I-73: Clinical Evaluation of CT Image Reconstructed Using Interior Tomography

    SciTech Connect

    Zhang, J; Ge, G; Winkler, M; Cong, W; Wang, G

    2014-06-01

    Purpose: Radiation dose reduction has been a long-standing challenge in CT imaging of obese patients. Recent advances in interior tomography (reconstruction of an interior region of interest (ROI) from line integrals associated with only paths through the ROI) promise to achieve significant radiation dose reduction without compromising image quality. This study investigates the application of this technique in CT imaging by evaluating the quality of images reconstructed from patient data. Methods: Projection data were obtained directly from patients who had CT examinations on a Dual Source CT scanner (DSCT). The two detectors in a DSCT acquired projection data simultaneously. One detector provided projection data for the full field of view (FOV, 50 cm) while the other detector provided truncated projection data for a FOV of 26 cm. Full-FOV CT images were reconstructed using both filtered back projection and an iterative algorithm, while the interior tomography algorithm was implemented to reconstruct ROI images. For comparison, FBP was also used to reconstruct ROI images. Reconstructed CT images were evaluated by radiologists and compared with images from the CT scanner. Results: The results show that the reconstructed ROI image was in excellent agreement with the truth inside the ROI, obtained from the CT scanner images, and the detailed features in the ROI were quantitatively accurate. The radiologists' evaluation shows that CT images reconstructed with interior tomography met diagnostic requirements. Radiation dose may be reduced up to 50% using interior tomography, depending on patient size. Conclusion: This study shows that interior tomography can be readily employed in CT imaging for radiation dose reduction. It may be especially useful in imaging obese patients, whose subcutaneous tissue is less clinically relevant but may significantly increase radiation dose.

  12. Three-dimensional reconstruction of light microscopy image sections: present and future.

    PubMed

    Wang, Yuzhen; Xu, Rui; Luo, Gaoxing; Wu, Jun

    2015-03-01

    Three-dimensional (3D) image reconstruction technologies can reveal previously hidden microstructures in human tissue. However, the lack of ideal, non-destructive cross-sectional imaging techniques is still a problem. Despite some drawbacks, histological sectioning remains one of the most powerful methods for accurate high-resolution representation of tissue structures. Computer technologies can produce 3D representations of interesting human tissue and organs that have been serial-sectioned, dyed or stained, imaged, and segmented for 3D visualization. 3D reconstruction also has great potential in the fields of tissue engineering and 3D printing. This article outlines the most common methods for 3D tissue section reconstruction. We describe the most important academic concepts in this field, and provide critical explanations and comparisons. We also note key steps in the reconstruction procedures, and highlight recent progress in the development of new reconstruction methods.

  13. Three-dimensional reconstruction of light microscopy image sections: present and future.

    PubMed

    Wang, Yuzhen; Xu, Rui; Luo, Gaoxing; Wu, Jun

    2015-03-01

    Three-dimensional (3D) image reconstruction technologies can reveal previously hidden microstructures in human tissue. However, the lack of ideal, non-destructive cross-sectional imaging techniques is still a problem. Despite some drawbacks, histological sectioning remains one of the most powerful methods for accurate high-resolution representation of tissue structures. Computer technologies can produce 3D representations of interesting human tissue and organs that have been serial-sectioned, dyed or stained, imaged, and segmented for 3D visualization. 3D reconstruction also has great potential in the fields of tissue engineering and 3D printing. This article outlines the most common methods for 3D tissue section reconstruction. We describe the most important academic concepts in this field, and provide critical explanations and comparisons. We also note key steps in the reconstruction procedures, and highlight recent progress in the development of new reconstruction methods. PMID:24952302

  14. Image reconstruction using projections from a few views by discrete steering combined with DART

    NASA Astrophysics Data System (ADS)

    Kwon, Junghyun; Song, Samuel M.; Kauke, Brian; Boyd, Douglas P.

    2012-03-01

    In this paper, we propose an algebraic reconstruction technique (ART)-based discrete tomography method to reconstruct an image accurately using projections from a few views. We specifically consider the problem of reconstructing an image of bottles filled with various types of liquids from X-ray projections. By exploiting the fact that bottles are usually filled with homogeneous material, we show that it is possible to obtain accurate reconstruction with only a few projections by an ART-based algorithm. In order to deal with various types of liquids in our problem, we first introduce our discrete steering method, which is a generalization of the binary steering approach for our proposed multi-valued discrete reconstruction. The main idea of the steering approach is to use slowly varying thresholds instead of fixed thresholds. We further improve reconstruction accuracy by reducing the number of variables in ART by combining our discrete steering with the discrete ART (DART), which fixes the values of interior pixels of segmented regions considered as reliable. Simulation studies show that our proposed discrete steering combined with DART yields superior reconstructions compared with both the discrete-steering-only and DART-only cases. The resulting reconstructions are quite accurate even with projections from only four views.
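
    A hedged sketch of the general idea rather than the authors' exact algorithm: classical ART (Kaczmarz) sweeps are interleaved with a steering step whose acceptance band around each allowed gray level slowly widens, so pixels are gradually snapped to a small discrete set of values. The system matrix A, data y, and the banding schedule are illustrative assumptions.

```python
import numpy as np

def art_sweep(A, y, x, relax=0.2):
    """One Kaczmarz (ART) pass over all projection rays."""
    for i in range(A.shape[0]):
        a = A[i]
        denom = a @ a
        if denom > 0:
            x += relax * (y[i] - a @ x) / denom * a
    return x

def discrete_steering(A, y, levels, n_outer=20):
    """ART with slowly varying thresholds toward a discrete set of values.

    `levels` is the sorted list of allowed gray values (at least two entries).
    """
    levels = np.asarray(levels, float)
    x = np.zeros(A.shape[1])
    for k in range(n_outer):
        x = art_sweep(A, y, x)
        # Steering: the acceptance band widens as iterations proceed, so more
        # and more pixels are snapped to their nearest allowed level.
        band = 0.5 * np.min(np.diff(levels)) * (k + 1) / n_outer
        nearest = levels[np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)]
        snap = np.abs(x - nearest) < band
        x[snap] = nearest[snap]
    return x
```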

  15. A novel automated image analysis method for accurate adipocyte quantification

    PubMed Central

    Osman, Osman S; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J; O’Dowd, Jacqueline F; Cawthorne, Michael A; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-sectional area of adipocytes in histological sections, allowing rapid high-throughput quantification of fat cell size and number. Photomicrographs of H&E-stained paraffin sections of murine gonadal adipose were transformed using standard image processing/analysis algorithms to reduce background and enhance edge-detection. This allowed the isolation of individual adipocytes from which their area could be calculated. Performance was compared with manual measurements made from the same images, in which adipocyte area was calculated from estimates of the major and minor axes of individual adipocytes. Both methods identified an increase in mean adipocyte size in a murine model of obesity, with good concordance, although the calculation used to identify cell area from manual measurements was found to consistently over-estimate cell size. Here we report an accurate method to determine adipocyte area in histological sections that provides a considerable time saving over manual methods. PMID:23991362
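
    As a rough illustration of this kind of automated measurement (not the authors' pipeline, which uses background reduction and edge enhancement), the sketch below thresholds a grayscale section image, labels connected bright regions as candidate adipocytes, and returns their areas. The global threshold, morphological opening, minimum size, and pixel scale are placeholder assumptions.

```python
import numpy as np
from scipy import ndimage

def adipocyte_areas(gray, um_per_pixel=1.0, min_px=50):
    """Estimate per-cell cross-sectional areas from a grayscale section image.

    In H&E sections adipocyte interiors appear bright, so a crude global
    threshold separates them from the darker membranes and background.
    """
    cells = gray > gray.mean()                           # placeholder threshold
    cells = ndimage.binary_opening(cells, iterations=2)  # suppress small debris
    labels, n = ndimage.label(cells)                     # connected components
    sizes = ndimage.sum(cells, labels, index=np.arange(1, n + 1))
    sizes = sizes[sizes >= min_px]                       # drop fragments
    return sizes * um_per_pixel ** 2                     # areas in um^2
```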

  16. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.

  17. Image reconstruction of IRAS survey scans

    NASA Technical Reports Server (NTRS)

    Bontekoe, Tj. Romke

    1990-01-01

    The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.

  18. Visual image reconstruction from human brain activity: A modular decoding approach

    NASA Astrophysics Data System (ADS)

    Miyawaki, Yoichi; Uchida, Hajime; Yamashita, Okito; Sato, Masa-aki; Morito, Yusuke; Tanabe, Hiroki C.; Sadato, Norihiro; Kamitani, Yukiyasu

    2009-12-01

    Brain activity represents our perceptual experience. But the potential for reading out perceptual contents from human brain activity has not been fully explored. In this study, we demonstrate constraint-free reconstruction of visual images perceived by a subject, from the brain activity pattern. We reconstructed visual images by combining local image bases with multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 x 10-patch images (2^100 possible states) were accurately reconstructed without any image prior by measuring brain activity only for several hundred random images. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multi-voxel patterns.
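
    The final reconstruction step, summing local image bases of multiple scales weighted by their independently decoded contrasts, can be sketched as follows. This is an illustrative reading of the approach, not the authors' code: the basis layout (1x1 to 2x2 rectangular patches tiling a 10 x 10 grid) and the random stand-in for decoder output are assumptions.

```python
import numpy as np

def multiscale_bases(size=10, scales=((1, 1), (1, 2), (2, 1), (2, 2))):
    """Local image bases: rectangular patches of several scales tiling the grid."""
    bases = []
    for h, w in scales:
        for r in range(0, size, h):
            for c in range(0, size, w):
                b = np.zeros((size, size))
                b[r:r + h, c:c + w] = 1.0
                bases.append(b)
    return np.stack(bases)

def reconstruct(decoded_contrasts, bases):
    """Weighted sum of the bases, normalised where scales overlap."""
    img = np.tensordot(decoded_contrasts, bases, axes=1)
    return img / bases.sum(axis=0)

bases = multiscale_bases()
contrasts = np.random.default_rng(0).random(len(bases))  # stand-in for decoded contrasts
image = reconstruct(contrasts, bases)
```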

  19. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    NASA Astrophysics Data System (ADS)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    To create a repository of clinical data, CT images and tissue samples and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Resource Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high-spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom containing five regions of distinct CT attenuations (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU), and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and BONE-filtered images. The CT numbers measured in the ACR CT Phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the
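
    The emphysema index mentioned above (percentage of lung voxels below -950 HU) is straightforward to compute; a minimal sketch follows. The lung mask is assumed to be given, and the optional 3 x 3 x 3 median filter mimics the smoothing applied to the BONE reconstructions.

```python
import numpy as np
from scipy.ndimage import median_filter

def emphysema_fraction(hu_volume, lung_mask, threshold=-950, smooth=False):
    """Percentage of lung voxels with CT number below `threshold` (in HU).

    `hu_volume` is a 3D array of CT numbers and `lung_mask` a boolean array of
    the same shape marking lung voxels; `smooth=True` applies a 3x3x3 median
    filter before thresholding.
    """
    vol = median_filter(hu_volume, size=3) if smooth else hu_volume
    lung_hu = vol[lung_mask]
    return 100.0 * np.count_nonzero(lung_hu < threshold) / lung_hu.size
```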

  20. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439

  1. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.

  2. Effects of uncertainty in camera geometry on three-dimensional catheter reconstruction from biplane fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Dietz, Anthony; Kynor, David B.; Friets, Eric; Triedman, John; Hammer, Peter

    2002-05-01

    Clinical procedures that rely on biplane x-ray images for three-dimensional (3-D) information may be enhanced by three-dimensional reconstructions. However, the accuracy of reconstructed images is dependent on the uncertainty associated with the parameters that define the geometry of the camera system. In this paper, we use a numerical simulation to examine the effect of these uncertainties and to determine the limits required for adequate three-dimensional reconstruction. We then test our conclusions with images of a calibration phantom recorded using a clinical system. A set of reconstruction routines, developed for a cardiac mapping system, were used in this evaluation. The routines include procedures for correcting image distortion and for automatically locating catheter electrodes. Test images were created using a numerical simulation of a biplane x-ray projection system. The reconstruction routines were then applied using accurate and perturbed camera geometries and error maps were produced. Our results indicate that useful catheter reconstructions are possible with reasonable bounds on the uncertainty of camera geometry provided the locations of the camera isocenters are accurate. The results of this study provide a guide for the specification of camera geometry display systems and for researchers evaluating possible methodologies for determining camera geometry.

  3. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

    This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the spatio-temporal distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, some radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, every such database contains unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages obtained with different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and applicable to chronological research together with recent measurements. Here, we introduce the methodological framework and archaeological applications.

  4. A new reconstruction strategy for image improvement in pinhole SPECT.

    PubMed

    Zeniya, Tsutomu; Watabe, Hiroshi; Aoi, Toshiyuki; Kim, Kyeong Min; Teramoto, Noboru; Hayashi, Takuya; Sohlberg, Antti; Kudo, Hiroyuki; Iida, Hidehiro

    2004-08-01

    Pinhole single-photon emission computed tomography (SPECT) is able to provide information on the biodistribution of several radioligands in small laboratory animals, but has limitations associated with non-uniform spatial resolution or axial blurring. We have hypothesised that this blurring is due to incompleteness of the projection data acquired by a single circular pinhole orbit, and have evaluated a new strategy for accurate image reconstruction with better spatial resolution uniformity. A pinhole SPECT system using two circular orbits and a dedicated three-dimensional ordered subsets expectation maximisation (3D-OSEM) reconstruction method were developed. In this system, not the camera but the object rotates, and the two orbits are at 90 degrees and 45 degrees relative to the object's axis. This system satisfies Tuy's condition, and is thus able to provide complete data for 3D pinhole SPECT reconstruction within the whole field of view (FOV). To evaluate this system, a series of experiments was carried out using a multiple-disk phantom filled with 99mTc solution. The feasibility of the proposed method for small animal imaging was tested with a mouse bone study using 99mTc-hydroxymethylene diphosphonate. Feldkamp's filtered back-projection (FBP) method and the 3D-OSEM method were applied to these data sets, and the visual and statistical properties were examined. Axial blurring, which was still visible at the edge of the FOV even after applying the conventional 3D-OSEM instead of FBP for single-orbit data, was not visible after application of 3D-OSEM using two-orbit data. 3D-OSEM using two-orbit data dramatically reduced the resolution non-uniformity and statistical noise, and also demonstrated considerably better image quality in the mouse scan. This system may be of use in quantitative assessment of bio-physiological functions in small animals.

  5. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  6. MMSE Reconstruction for 3D Freehand Ultrasound Imaging

    PubMed Central

    Huang, Wei; Zheng, Yibin

    2008-01-01

    The reconstruction of 3D ultrasound (US) images from mechanically registered, but otherwise irregularly positioned, B-scan slices is of great interest in image guided therapy procedures. Conventional 3D ultrasound algorithms have low computational complexity, but the reconstructed volume suffers from severe speckle contamination. Furthermore, the current method cannot reconstruct uniform high-resolution data from several low-resolution B-scans. In this paper, the minimum mean-squared error (MMSE) method is applied to 3D ultrasound reconstruction. Data redundancies due to overlapping samples as well as correlation of the target and speckle are naturally accounted for in the MMSE reconstruction algorithm. Thus, the reconstruction process unifies the interpolation and spatial compounding. Simulation results for synthetic US images are presented to demonstrate the excellent reconstruction. PMID:18382623

  7. Total variation minimization-based multimodality medical image reconstruction

    NASA Astrophysics Data System (ADS)

    Cui, Xuelin; Yu, Hengyong; Wang, Ge; Mili, Lamine

    2014-09-01

    Since its recent inception, simultaneous image reconstruction for multimodality fusion has received a great deal of attention due to its superior imaging performance. On the other hand, the compressed sensing (CS)-based image reconstruction methods have undergone rapid development because of their ability to significantly reduce the amount of raw data. In this work, we combine computed tomography (CT) and magnetic resonance imaging (MRI) into a single CS-based reconstruction framework. From a theoretical viewpoint, the CS-based reconstruction methods require prior sparsity knowledge to perform reconstruction. In addition to the conventional data fidelity term, the multimodality imaging information is utilized to improve the reconstruction quality. The prior information in this context is that most medical images can be approximated by a piecewise constant model, and the discrete gradient transform (DGT), whose norm is the total variation (TV), can serve as a sparse representation. More importantly, the multimodality images from the same object must share structural similarity, which can be captured by DGT. The prior information on similar distributions from the sparse DGTs is employed to improve the CT and MRI image quality synergistically for a CT-MRI scanner platform. Numerical simulation with undersampled CT and MRI datasets is conducted to demonstrate the merits of the proposed hybrid image reconstruction approach. Our preliminary results confirm that the proposed method outperforms the conventional CT and MRI reconstructions when they are applied separately.
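
    For concreteness, the sketch below shows a forward-difference discrete gradient transform (DGT), the total variation it induces, and one simple way (an assumption, not necessarily the paper's coupling term) to express cross-modality structural similarity by comparing gradient magnitudes of co-registered CT and MRI images.

```python
import numpy as np

def discrete_gradient(img):
    """Forward-difference discrete gradient transform (DGT) of a 2D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal differences
    gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical differences
    return gx, gy

def total_variation(img):
    """Isotropic TV: the l1 norm of the gradient magnitude."""
    gx, gy = discrete_gradient(img)
    return np.sum(np.hypot(gx, gy))

def structural_mismatch(img_ct, img_mri):
    """Illustrative cross-modality prior: penalise differing gradient supports."""
    (cx, cy), (mx, my) = discrete_gradient(img_ct), discrete_gradient(img_mri)
    return np.sum(np.abs(np.hypot(cx, cy) - np.hypot(mx, my)))
```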

  8. Tree STEM Reconstruction Using Vertical Fisheye Images: a Preliminary Study

    NASA Astrophysics Data System (ADS)

    Berveglieri, A.; Tommaselli, A. M. G.

    2016-06-01

    A preliminary study was conducted to assess a tree stem reconstruction technique with panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral view are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images and with the advantage of using fewer images and taken from one single station.

  9. Lunar Surface Reconstruction from Apollo MC Images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.; Al-Jarrah, Ahmad; Walker, Kyle

    2015-07-01

    The last three Apollo lunar missions (15, 16, and 17) carried an integrated photogrammetric mapping system of a metric camera (MC), a high-resolution panoramic camera, a star camera, and a laser altimeter. Recently, images taken by the MC were scanned by Arizona State University (ASU); these images contain valuable information for scientific exploration, engineering analysis, and visualization of the Moon's surface. In this article, we took advantage of the large overlaps, the multiple viewing directions, and the high ground resolution of the images taken by the Apollo MC in generating an accurate and reliable surface of the Moon. We started by computing the relative positions and orientations of the exposure stations through a rigorous photogrammetric bundle adjustment process. We then generated a surface model using a hierarchical correlation-based matching algorithm. The matching algorithm was implemented in a multi-photo scheme and permits the exclusion of obscured pixels. The generated surface model was registered with LOLA topographic data and the comparison between the two surfaces yielded an average absolute difference of 36 m. These results look very promising and demonstrate the effectiveness of the proposed algorithm in accounting for depth discontinuities, occlusions, and image-signal noise.

  10. Noise and resolution of Bayesian reconstruction for multiple image configurations

    SciTech Connect

    Chinn, G.; Huang, Sung Cheng

    1993-12-01

    Images reconstructed by Bayesian and maximum-likelihood (ML) methods using a Gibbs prior with prior weight β were compared with images produced by filtered back projection (FBP) from sinogram data simulated with different counts and image configurations. Bayesian images were generated by the OSL algorithm accelerated by an over-relaxation parameter. For relatively low β, Bayesian images can yield an overall improvement to the images compared to ML reconstruction. However, for larger β, Bayesian images degrade from the standpoint of noise and quantitation. Compared to FBP, the ML images were superior in a mean square error sense in regions of low activity level and for small structures. At a comparable noise level to FBP, Bayesian reconstruction can be used to effectively recover higher resolution images. The overall performance is dependent on the image structure and the weight of the Bayesian prior.

  11. Terrain reconstruction from Chang'e-3 PCAM images

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Rui; Ren, Xin; Wang, Fen-Fei; Liu, Jian-Jun; Li, Chun-Lai

    2015-07-01

    The existing terrain models that describe the local lunar surface have limited resolution and accuracy, which can hardly meet the needs of rover navigation, positioning and geological analysis. China launched the lunar probe Chang'e-3 in December, 2013. Chang'e-3 encompassed a lander and a lunar rover called “Yutu” (Jade Rabbit). A set of panoramic cameras were installed on the rover mast. After acquiring panoramic images of four sites that were explored, the terrain models of the local lunar surface with resolution of 0.02m were reconstructed. Compared with other data sources, the models derived from Chang'e-3 data were clear and accurate enough that they could be used to plan the route of Yutu. Supported by the National Natural Science Foundation of China.

  12. Calibration and Image Reconstruction for the Hurricane Imaging Radiometer (HIRAD)

    NASA Technical Reports Server (NTRS)

    Ruf, Christopher; Roberts, J. Brent; Biswas, Sayak; James, Mark W.; Miller, Timothy

    2012-01-01

    The Hurricane Imaging Radiometer (HIRAD) is a new airborne passive microwave synthetic aperture radiometer designed to provide wide swath images of ocean surface wind speed under heavy precipitation and, in particular, in tropical cyclones. It operates at 4, 5, 6 and 6.6 GHz and uses interferometric signal processing to synthesize a pushbroom imager in software from a low profile planar antenna with no mechanical scanning. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during Fall 2010 as its first science field campaign. HIRAD produced images of upwelling brightness temperature over an approximately 70 km swath width with approximately 3 km spatial resolution. From this, ocean surface wind speed and column averaged atmospheric liquid water content can be retrieved across the swath. The calibration and image reconstruction algorithms that were used to verify HIRAD functional performance during and immediately after GRIP were only preliminary and used a number of simplifying assumptions and approximations about the instrument design and performance. The development and performance of a more detailed and complete set of algorithms are reported here.

  13. Numerical modelling and image reconstruction in diffuse optical tomography

    PubMed Central

    Dehghani, Hamid; Srinivasan, Subhadra; Pogue, Brian W.; Gibson, Adam

    2009-01-01

    The development of diffuse optical tomography as a functional imaging modality has relied largely on the use of model-based image reconstruction. The recovery of optical parameters from boundary measurements of light propagation within tissue is inherently a difficult one, because the problem is nonlinear, ill-posed and ill-conditioned. Additionally, although the measured near-infrared signals of light transmission through tissue provide high imaging contrast, the reconstructed images suffer from poor spatial resolution due to the diffuse propagation of light in biological tissue. The application of model-based image reconstruction is reviewed in this paper, together with a numerical modelling approach to light propagation in tissue as well as generalized image reconstruction using boundary data. A comprehensive review and details of the basis for using spatial and structural prior information are also discussed, whereby the use of spectral and dual-modality systems can improve contrast and spatial resolution. PMID:19581256

  14. Reconstruction of biofilm images: combining local and global structural parameters

    SciTech Connect

    Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk

    2014-11-07

    Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.

  15. PAINTER: a spatiospectral image reconstruction algorithm for optical interferometry.

    PubMed

    Schutz, Antony; Ferrari, André; Mary, David; Soulez, Ferréol; Thiébaut, Éric; Vannier, Martin

    2014-11-01

    Astronomical optical interferometers sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid perturbations caused by atmospheric turbulence, the phases of the complex Fourier samples (visibilities) cannot be directly exploited. Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic optical interferometric instruments are now paving the way to multiwavelength imaging. This paper is devoted to the derivation of a spatiospectral (3D) image reconstruction algorithm, coined Polychromatic opticAl INTErferometric Reconstruction software (PAINTER). The algorithm relies on an iterative process, which alternates estimation of polychromatic images and complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also differential phases, which helps to better constrain the polychromatic reconstruction. Simulations on synthetic data illustrate the efficiency of the algorithm and, in particular, the relevance of injecting a differential phases model in the reconstruction.

  16. An accurate algorithm to match imperfectly matched images for lung tumor detection without markers.

    PubMed

    Rozario, Timothy; Bereg, Sergey; Yan, Yulong; Chiu, Tsuicheng; Liu, Honghuan; Kearney, Vasant; Jiang, Lan; Mao, Weihua

    2015-05-08

    In order to locate lung tumors on kV projection images without internal markers, digitally reconstructed radiographs (DRRs) are created and compared with projection images. However, lung tumors always move due to respiration and their locations change on projection images while they are static on DRRs. In addition, global image intensity discrepancies exist between DRRs and projections due to their different image orientations, scatter, and noise. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported to match imperfectly matched projection images and DRRs. The kV projection images were matched with different DRRs in two steps. Preprocessing was performed in advance to generate two sets of DRRs. The tumors were removed from the planning 3D CT (a single phase of the planning 4D CT) using the planning tumor contours. DRRs of background and DRRs of tumors were generated separately for every projection angle. The first step was to match projection images with DRRs of background signals. This method divided the full images into a matrix of small tiles, and similarity was evaluated by calculating the normalized cross-correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) was automatically optimized to keep the tumor within a single projection tile, which consequently matched poorly with the corresponding DRR tile. A pixel-based linear transformation was determined by linear interpolation of the tile transformation results obtained during tile matching. The background DRR was transformed accordingly and subtracted from the projection image, so the resulting subtracted image contained only the tumor. The second step was to register DRRs of tumors to the subtracted image to locate the tumor. This method was successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (BrainLAB) system for dynamic tumor tracking in phantom studies. Radiation opaque markers were
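
    The core of the first matching step is the tile-wise normalized cross-correlation; the sketch below (our own simplified version, with hypothetical function and parameter names) shows how such tile similarities could be computed, leaving out the tile-configuration optimization and the interpolation of the transformation.

```python
# Simplified sketch of tile-wise normalized cross-correlation (NCC) between a
# projection image and a DRR; not the authors' implementation.
import numpy as np

def ncc(tile_a, tile_b):
    """Zero-mean normalized cross-correlation of two equally sized tiles."""
    a = tile_a - tile_a.mean()
    b = tile_b - tile_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def tile_similarities(projection, drr, tile=32):
    """NCC evaluated over a regular grid of non-overlapping tiles."""
    rows, cols = projection.shape
    scores = {}
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            scores[(r, c)] = ncc(projection[r:r + tile, c:c + tile],
                                 drr[r:r + tile, c:c + tile])
    return scores
```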

  17. Algebraic Reconstruction Technique (ART) for parallel imaging reconstruction of undersampled radial data: Application to cardiac cine

    PubMed Central

    Li, Shu; Chan, Cheong; Stockmann, Jason P.; Tagare, Hemant; Adluru, Ganesh; Tam, Leo K.; Galiana, Gigi; Constable, R. Todd; Kozerke, Sebastian; Peters, Dana C.

    2014-01-01

    Purpose To investigate algebraic reconstruction technique (ART) for parallel imaging reconstruction of radial data, applied to accelerated cardiac cine. Methods A GPU-accelerated ART reconstruction was implemented and applied to simulations, point spread functions (PSFs), and twelve subjects imaged with radial cardiac cine acquisitions. Cine images were reconstructed with radial ART at multiple undersampling levels (192 Nr x Np = 96 to 16). Images were qualitatively and quantitatively analyzed for sharpness and artifacts, and compared to filtered back-projection (FBP) and conjugate gradient SENSE (CG SENSE). Results Radial ART provided reduced artifacts and mainly preserved spatial resolution for both simulations and in vivo data. Artifacts were qualitatively and quantitatively lower with ART than with FBP using 48, 32, and 24 Np, although FBP provided quantitatively sharper images at undersampling levels of 48-24 Np (all p<0.05). Use of undersampled radial data for generating auto-calibrated coil-sensitivity profiles resulted in slightly reduced quality. ART was comparable to CG SENSE. GPU-acceleration increased ART reconstruction speed 15-fold, with little impact on the images. Conclusion GPU-accelerated ART is an alternative approach to image reconstruction for parallel radial MR imaging, providing reduced artifacts while mainly maintaining sharpness compared to FBP, as shown by its first application in cardiac studies. PMID:24753213
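
    For readers unfamiliar with ART, its core is a Kaczmarz-style row-action update for a linear model Ax = b; the sketch below shows only that generic update (the published reconstruction additionally incorporates coil sensitivities, radial sampling, and GPU acceleration, none of which are reproduced here).

```python
# Generic relaxed Kaczmarz/ART iteration for a dense system A x = b; illustrative only.
import numpy as np

def art(A, b, n_sweeps=10, relax=0.5):
    """Sweep over the rows of A, projecting x toward each row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a_i = A[i]
            norm_sq = a_i @ a_i
            if norm_sq > 0:
                x = x + relax * (b[i] - a_i @ x) / norm_sq * a_i
    return x
```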

  18. Sub-angstrom microscopy through incoherent imaging and image reconstruction

    NASA Astrophysics Data System (ADS)

    Pennycook, S. J.; Jesson, D. E.; Chisholm, M. F.; Ferridge, A. G.; Seddon, M. J.

    1992-03-01

    Z-contrast scanning transmission electron microscopy (STEM) with a high-angle annular detector breaks the coherence of the imaging process, and provides an incoherent image of a crystal projection. Even in the presence of strong dynamical diffraction, the image can be accurately described as a convolution between an object function, sharply peaked at the projected atomic sites, and the probe intensity profile. Such an image can be inverted intuitively without the need for model structures, and therefore provides the important capability to reveal unanticipated interfacial arrangements. It represents a direct image of the crystal projection, revealing the location of the atomic columns and their relative high-angle scattering power. Since no phase is associated with a peak in the object function or the contrast transfer function, extension to higher resolution is also straightforward. Image restoration techniques such as maximum entropy, in conjunction with the 1.3 Å probe anticipated for a 300 kV STEM, appear to provide a simple and robust route to the achievement of sub-angstrom resolution electron microscopy.

  19. Electrical CT image reconstruction technique for powder flow in petroleum refinery process

    NASA Astrophysics Data System (ADS)

    Takei, Masahiro; Doh, Deog-Hee; Ochi, Mitsuaki

    2008-03-01

    A new reconstruction method called sampled pattern matching (SPM) was applied to image reconstruction for electrical capacitance computed tomography of powder flow in a vertical pipe, as used in petroleum refinery processes. This new method is able to achieve stable convergence without the use of an empirical value. Experiments were carried out using fluid catalytic cracking (FCC) catalysts as the powder, with two air volume flow rates and four powder volume flow rates, to measure the capacitance of a pipe cross section. The SPM method is compared with conventional methods in terms of volume fraction, residual capacitance, and correlation capacitance. Overall, the SPM method proved superior to conventional methods, without requiring any empirical value, because it achieves accurate reconstruction using an objective function calculated as the inner product between the experimental capacitance and the capacitance of the reconstructed image.
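
    As described, the SPM objective is an inner product between measured and model-predicted capacitances; the fragment below is a schematic version of that objective (the linear sensitivity-matrix forward model and the variable names are our own assumptions), omitting the sampled-pattern search itself.

```python
# Schematic SPM-style objective: inner product between the measured capacitance
# vector and the capacitance predicted from a candidate image (assumes a linear
# sensitivity-matrix forward model; not the authors' code).
import numpy as np

def spm_objective(measured_capacitance, candidate_image, sensitivity_matrix):
    predicted = sensitivity_matrix @ candidate_image.ravel()
    return float(np.dot(measured_capacitance, predicted))
```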

  20. Reconstruction of the activity of point sources for the accurate characterization of nuclear waste drums by segmented gamma scanning.

    PubMed

    Krings, Thomas; Mauerhofer, Eric

    2011-06-01

    This work improves the reliability and accuracy in the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angle-dependent count rate distribution measured during drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angle-dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angle-dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method assuming a homogeneous matrix and activity distribution.

  1. Quantitative image quality evaluation for cardiac CT reconstructions

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.

    2016-03-01

    Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Plaques of three different percentages embedded in a 3 mm vessel phantom were imaged multiple times under motion-free conditions and at 60 bpm and 80 bpm heart rates. Static (motion-free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.

  2. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

    Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation caused by the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces storage and computation costs. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
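
    The factored forward projection amounts to three sparse matrix-vector products applied in sequence; the sketch below illustrates that structure with scipy.sparse, using random placeholder matrices rather than physically meaningful ones.

```python
# Factored forward/back projection as described: y = B_sino (G (B_img x)).
# Matrix contents here are random placeholders; only the structure is illustrated.
import numpy as np
import scipy.sparse as sp

n_pix, n_lor = 64 * 64, 5000                        # image pixels, lines of response
image_blur = sp.random(n_pix, n_pix, density=1e-3, format="csr")
geom_proj = sp.random(n_lor, n_pix, density=1e-3, format="csr")
sino_blur = sp.random(n_lor, n_lor, density=1e-3, format="csr")

x = np.random.rand(n_pix)                                   # current image estimate
y = sino_blur @ (geom_proj @ (image_blur @ x))              # forward projection
x_back = image_blur.T @ (geom_proj.T @ (sino_blur.T @ y))   # matched back projection
```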

  3. Particle Image Velocimetry Measurements in Anatomically-Accurate Models of the Mammalian Nasal Cavity

    NASA Astrophysics Data System (ADS)

    Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.

    2012-11-01

    A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.

  4. Robust image reconstruction enhancement based on Gaussian mixture model estimation

    NASA Astrophysics Data System (ADS)

    Zhao, Fan; Zhao, Jian; Han, Xizhen; Wang, He; Liu, Bochao

    2016-03-01

    The low quality of an image is often characterized by low contrast and blurred edge details. Gradients have a direct relationship with image edge details. More specifically, the larger the gradients, the clearer the image details become. Robust image reconstruction enhancement based on Gaussian mixture model estimation is proposed here. First, the image is transformed into its gradient domain and the gradient histogram is obtained. Second, the gradient histogram is estimated and extended using a Gaussian mixture model, and a predetermined function is constructed. Then, using histogram specification, the gradient field is enhanced under the constraint of the predetermined function. Finally, a matrix sine transform-based method is applied to reconstruct the enhanced image from the enhanced gradient field. Experimental results show that the proposed algorithm can effectively enhance different types of images, such as medical, aerial, and visible-light images, providing high-quality image information for higher-level processing.
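
    The sketch below (our own, using scikit-learn's GaussianMixture) illustrates only the gradient-histogram modelling step; the histogram specification and matrix sine transform reconstruction stages are not reproduced.

```python
# Fit a Gaussian mixture model to the gradient-magnitude distribution of an
# image; illustrates only the histogram-modelling step described above.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gradient_gmm(image, n_components=3):
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy).ravel()
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(grad_mag.reshape(-1, 1))
    return gmm  # means_, covariances_, weights_ describe the gradient histogram model
```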

  5. Basis Functions in Image Reconstruction From Projections: A Tutorial Introduction

    NASA Astrophysics Data System (ADS)

    Herman, Gabor T.

    2015-11-01

    The series expansion approaches to image reconstruction from projections assume that the object to be reconstructed can be represented as a linear combination of fixed basis functions and the task of the reconstruction algorithm is to estimate the coefficients in such a linear combination based on the measured projection data. It is demonstrated that using spherically symmetric basis functions (blobs), instead of ones based on the more traditional pixels, yields superior reconstructions of medically relevant objects. The demonstration uses simulated computerized tomography projection data of head cross-sections and the series expansion method ART for the reconstruction. In addition to showing the results of one anecdotal example, the relative efficacy of using pixel and blob basis functions in image reconstruction from projections is also evaluated using a statistical-hypothesis-testing-based, task-oriented comparison methodology. The superiority of the efficacy of blob basis functions over that of pixel basis functions is found to be statistically significant.

  6. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Kudrolli, Haris A.

    2001-04-01

    A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to compute. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates

  7. Iterative Image Reconstruction for Limited-Angle CT Using Optimized Initial Image

    PubMed Central

    Guo, Jingyu; Qi, Hongliang; Xu, Yuan; Chen, Zijia; Li, Shulong; Zhou, Linghong

    2016-01-01

    Limited-angle computed tomography (CT) is important in some clinical applications. Existing iterative reconstruction algorithms cannot reconstruct high-quality images, and severe artifacts appear near edges. Optimal selection of the initial image influences iterative reconstruction performance but has not yet been studied in depth. In this work, we propose to generate an optimized initial image, exploiting image symmetry, followed by total variation (TV) based iterative reconstruction. Reconstruction results on simulated and real data indicate that the proposed method effectively removes artifacts near edges. PMID:27066107

  8. Artificial neural network Radon inversion for image reconstruction.

    PubMed

    Rodriguez, A F; Blass, W E; Missimer, J H; Leenders, K L

    2001-04-01

    Image reconstruction techniques are essential to computer tomography. Algorithms such as filtered backprojection (FBP) or algebraic techniques are most frequently used. This paper presents an attempt to apply a feed-forward back-propagation supervised artificial neural network (BPN) to tomographic image reconstruction, specifically to positron emission tomography (PET). The main result is that the network trained with Gaussian test images proved to be successful at reconstructing images from projection sets derived from arbitrary objects. Additional results relate to the design of the network and the full width at half maximum (FWHM) of the Gaussians in the training sets. First, the optimal number of nodes in the middle layer is about an order of magnitude less than the number of input or output nodes. Second, the number of iterations required to achieve a required training set tolerance appeared to decrease exponentially with the number of nodes in the middle layer. Finally, for training sets containing Gaussians of a single width, the optimal accuracy of reconstructing the control set is obtained with a FWHM of three pixels. Intended to explore feasibility, the BPN presented in the following does not provide reconstruction accuracy adequate for immediate application to PET. However, the trained network does reconstruct general images independent of the data with which it was trained. Proposed in the concluding section are several possible refinements that should permit the development of a network capable of fast reconstruction of three-dimensional images from the discrete, noisy projection data characteristic of PET.

  9. MREJ: MRE elasticity reconstruction on ImageJ.

    PubMed

    Xiang, Kui; Zhu, Xia Li; Wang, Chang Xin; Li, Bing Nan

    2013-08-01

    Magnetic resonance elastography (MRE) is a promising method for health evaluation and disease diagnosis. It makes use of elastic waves as a virtual probe to quantify soft tissue elasticity. The wave actuator, imaging modality and elasticity interpreter are all essential components for an MRE system. Efforts have been made to develop more effective actuating mechanisms, imaging protocols and reconstructing algorithms. However, translating MRE wave images into soft tissue elasticity is a nontrivial issue for health professionals. This study contributes an open-source platform - MREJ - for MRE image processing and elasticity reconstruction. It is established on the widespread image-processing program ImageJ. Two algorithms for elasticity reconstruction were implemented with spatiotemporal directional filtering. The usability of the method is shown through virtual palpation on different phantoms and patients. Based on the results, we conclude that MREJ offers the MRE community a convenient and well-functioning program for image processing and elasticity interpretation.

  10. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those of the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.

  11. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  12. Sparsity-constrained PET image reconstruction with learned dictionaries.

    PubMed

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those of the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging. PMID:27494441

  13. Surface reconstruction from microscopic images in optical lithography.

    PubMed

    Estellers, Virginia; Thiran, Jean-Philippe; Gabrani, Maria

    2014-08-01

    This paper presents a method to reconstruct 3D surfaces of silicon wafers from 2D images of printed circuits taken with a scanning electron microscope. Our reconstruction method combines the physical model of the optical acquisition system with prior knowledge about the shapes of the patterns in the circuit; the result is a shape-from-shading technique with a shape prior. The reconstruction of the surface is formulated as an optimization problem with an objective functional that combines a data-fidelity term on the microscopic image with two prior terms on the surface. The data term models the acquisition system through the irradiance equation characteristic of the microscope; the first prior is a smoothness penalty on the reconstructed surface, and the second prior constrains the shape of the surface to agree with the expected shape of the pattern in the circuit. In order to account for the variability of the manufacturing process, this second prior includes a deformation field that allows a nonlinear elastic deformation between the expected pattern and the reconstructed surface. As a result, the minimization problem has two unknowns, and the reconstruction method provides two outputs: 1) a reconstructed surface and 2) a deformation field. The reconstructed surface is derived from the shading observed in the image and the prior knowledge about the pattern in the circuit, while the deformation field produces a mapping between the expected shape and the reconstructed surface that provides a measure of deviation between the circuit design models and the real manufacturing process.
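
    In schematic form (the symbols below are our own notation, not necessarily the paper's: S is the reconstructed surface, I the SEM image, R the irradiance model, S0 the expected pattern, φ the deformation field, and λ1, λ2 weighting parameters), the objective functional combines the three terms as

```latex
\min_{S,\,\varphi}\;
\int_{\Omega}\bigl(I(\mathbf{x})-R\bigl(S(\mathbf{x})\bigr)\bigr)^{2}\,d\mathbf{x}
\;+\;\lambda_{1}\int_{\Omega}\lVert\nabla S(\mathbf{x})\rVert^{2}\,d\mathbf{x}
\;+\;\lambda_{2}\int_{\Omega}\bigl(S(\mathbf{x})-S_{0}\bigl(\mathbf{x}+\varphi(\mathbf{x})\bigr)\bigr)^{2}\,d\mathbf{x},
```

    where the first term is the irradiance-based data fidelity, the second enforces smoothness of the surface, and the third ties the surface to the expected pattern through the elastic deformation field.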

  14. Infrared Astronomical Satellite (IRAS) image reconstruction and restoration

    NASA Technical Reports Server (NTRS)

    Gonsalves, R. A.; Lyons, T. D.; Price, S. D.; Levan, P. D.; Aumann, H. H.

    1987-01-01

    IRAS sky mapping data is being reconstructed as images, and an entropy-based restoration algorithm is being applied in an attempt to improve spatial resolution in extended sources. Reconstruction requires interpolation of non-uniformly sampled data. Restoration is accomplished with an iterative algorithm which begins with an inverse filter solution and iterates on it with a weighted entropy-based spectral subtraction.

  15. Automatic lumen segmentation in IVOCT images using binary morphological reconstruction

    PubMed Central

    2013-01-01

    Background Atherosclerosis causes millions of deaths annually and yields billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high-resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving their accuracy and usefulness. Method An automatic lumen segmentation approach, based on Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu thresholding; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries, and rabbit iliac arteries; the outcomes were compared with gold standards produced by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions By segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation. PMID:23937790
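
    A rough stand-in for the three-stage pipeline can be written with scikit-image (wavelet denoising, Otsu binarization, binary morphological reconstruction); the sketch below is a simplified illustration under those assumptions, not the authors' implementation, and omits the adapted Otsu variant and the IVOCT-specific preprocessing.

```python
# Simplified pipeline sketch: preprocessing, Otsu binarization, and binary
# morphological reconstruction with scikit-image; illustrative only.
import numpy as np
from skimage.restoration import denoise_wavelet
from skimage.filters import threshold_otsu
from skimage.morphology import binary_erosion, disk, reconstruction

def tissue_mask(ivoct_image):
    """ivoct_image: 2D float array (cross-sectional IVOCT frame).
    Returns a cleaned binary tissue mask; the lumen boundary follows from its inner edge."""
    smoothed = denoise_wavelet(ivoct_image)                          # preprocessing stage
    binary = (smoothed > threshold_otsu(smoothed)).astype(np.uint8)  # feature extraction
    seed = binary_erosion(binary, disk(5)).astype(np.uint8)          # marker inside the tissue
    restored = reconstruction(seed, binary, method="dilation")       # morphological reconstruction
    return restored.astype(bool)
```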

  16. Deep Wideband Single Pointings and Mosaics in Radio Interferometry: How Accurately Do We Reconstruct Intensities and Spectral Indices of Faint Sources?

    NASA Astrophysics Data System (ADS)

    Rau, U.; Bhatnagar, S.; Owen, F. N.

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy with which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, to enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

  17. Improved Diffusion Imaging through SNR-Enhancing Joint Reconstruction

    PubMed Central

    Haldar, Justin P.; Wedeen, Van J.; Nezamzadeh, Marzieh; Dai, Guangping; Weiner, Michael W.; Schuff, Norbert; Liang, Zhi-Pei

    2012-01-01

    Quantitative diffusion imaging is a powerful technique for the characterization of complex tissue microarchitecture. However, long acquisition times and limited signal-to-noise ratio (SNR) represent significant hurdles for many in vivo applications. This paper presents a new approach to reduce noise while largely maintaining resolution in diffusion weighted images, using a statistical reconstruction method that takes advantage of the high level of structural correlation observed in typical datasets. Compared to existing denoising methods, the proposed method performs reconstruction directly from the measured complex k-space data, allowing for Gaussian noise modeling and theoretical characterizations of the resolution and SNR of the reconstructed images. In addition, the proposed method is compatible with many different models of the diffusion signal (e.g., diffusion tensor modeling, q-space modeling, etc.). The joint reconstruction method can provide significant improvements in SNR relative to conventional reconstruction techniques, with a relatively minor corresponding loss in image resolution. Results are shown in the context of diffusion spectrum imaging tractography and diffusion tensor imaging, illustrating the potential of this SNR-enhancing joint reconstruction approach for a range of different diffusion imaging experiments. PMID:22392528

  18. Evaluation of similarity measures for reconstruction-based registration in image-guided radiotherapy and surgery

    SciTech Connect

    Skerl, Darko; Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2006-07-01

    Purpose: A promising patient positioning technique is based on registering computed tomographic (CT) or magnetic resonance (MR) images to cone-beam CT images (CBCT). The extra radiation dose delivered to the patient can be substantially reduced by using fewer projections. This approach results in lower quality CBCT images. The purpose of this study is to evaluate a number of similarity measures (SMs) suitable for registration of CT or MR images to low-quality CBCTs. Methods and Materials: Using the recently proposed evaluation protocol, we evaluated nine SMs with respect to pretreatment imaging modalities, number of two-dimensional (2D) images used for reconstruction, and number of reconstruction iterations. The image database consisted of 100 X-ray and corresponding CT and MR images of two vertebral columns. Results: Using a higher number of 2D projections or reconstruction iterations results in higher accuracy and slightly lower robustness. The similarity measures that behaved the best also yielded the best registration results. The most appropriate similarity measure was the asymmetric multi-feature mutual information (AMMI). Conclusions: The evaluation protocol proved to be a valuable tool for selecting the best similarity measure for the reconstruction-based registration. The results indicate that accurate and robust CT/CBCT or even MR/CBCT registrations are possible if the AMMI similarity measure is used.

  19. Image reconstruction from partial pseudo polar Fourier sampling based on alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Liu, Qiu-hong; Shu, Fan; Zhang, Wen-kun; Cai, Ai-long; Li, Lei; Yan, Bin

    2013-08-01

    Linear scan Computed Tomography (LCT) has emerged as a promising technique in fields like industrial scanning and security inspection due to its straight-line source trajectory and high scanning speed. However, in practical applications of LCT, ordinary algorithms suffer from serious artifacts owing to the limited angular coverage and insufficient data. In this paper, a new method is proposed that reconstructs the image from partial Fourier data sampled on a pseudo polar grid, based on alternating direction anisotropic total variation minimization. The main idea is to reformulate image reconstruction as the solution of an under-determined linear system and then reconstruct the image by applying total variation (TV) minimization, recast as an unconstrained optimization by means of the augmented Lagrangian method and solved with the alternating direction method of multipliers (ADMM), which contributes to fast convergence. The proposed method is practical for large-scale reconstruction tasks due to its algorithmic simplicity and computational efficiency, and it reconstructs better images. The results of the numerical simulations and pseudo real data reconstructions from the linear scan validate that the proposed method is both efficient and accurate.
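
    In generic notation (our own, with A the partial pseudo polar Fourier sampling operator, y the measured data, ∇ a discrete gradient operator, and μ, ρ weights), the splitting described above reads

```latex
\min_{x}\;\tfrac{1}{2}\lVert A x - y\rVert_{2}^{2}+\mu\lVert\nabla x\rVert_{1}
\quad\Longleftrightarrow\quad
\min_{x,\,z}\;\tfrac{1}{2}\lVert A x - y\rVert_{2}^{2}+\mu\lVert z\rVert_{1}
\quad\text{s.t.}\quad z=\nabla x,
```

    with the augmented Lagrangian

```latex
L_{\rho}(x,z,\lambda)=\tfrac{1}{2}\lVert A x-y\rVert_{2}^{2}+\mu\lVert z\rVert_{1}
+\lambda^{\top}(\nabla x-z)+\tfrac{\rho}{2}\lVert\nabla x-z\rVert_{2}^{2}.
```

    ADMM then alternates an x-update (a linear least-squares solve), a z-update (elementwise soft-thresholding), and a dual update of λ, which is the source of the fast convergence mentioned above.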

  20. Effect of Object Orientation Angle on T2* Image and Reconstructed Magnetic Susceptibility: Numerical Simulations

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2013-01-01

    The magnetic field resulting from material magnetization in magnetic resonance imaging (MRI) has an object orientation effect, which produces an orientation dependence for acquired T2* images. On one hand, the orientation effect can be exploited for object anisotropy investigation (via multi-angle imaging); on the other hand, it is desirable to remove the orientation dependence using magnetic susceptibility reconstruction. In this report, we design a stick-star digital phantom to simulate multiple orientations of a stick-like object and use it to conduct various numerical simulations. Our simulations show that the object orientation effect is not propagated to the reconstructed magnetic susceptibility distribution. This suggests that accurate susceptibility reconstruction methods should be largely orientation independent. PMID:25114542

  1. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-01

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a dictionary-based sparsity penalty into a maximum likelihood estimator. Patch sparsity over a dictionary provides the regularization, and iterative procedures are used to solve the maximum likelihood problem formulated on Poisson statistics. Specifically, in our formulation, a dictionary can be trained on CT images, to provide intrinsic anatomical structure for the reconstructed images, or adaptively learned from the noisy PET measurements. The accuracy of the strategy is demonstrated with very promising results on Monte Carlo simulations and real data.
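
    In schematic form (our own notation: y the measured counts, G the system matrix, P_j a patch-extraction operator, D the dictionary, α_j the sparse codes, and β, λ weights), the penalized Poisson maximum-likelihood estimator described above can be written as

```latex
\hat{x}=\arg\max_{x\ge 0}\;\sum_{i}\Bigl(y_{i}\,\log\,[Gx]_{i}-[Gx]_{i}\Bigr)
\;-\;\beta\sum_{j}\Bigl(\tfrac{1}{2}\,\lVert P_{j}x-D\,\alpha_{j}\rVert_{2}^{2}+\lambda\,\lVert\alpha_{j}\rVert_{1}\Bigr),
```

    with the image x and the sparse codes α_j updated alternately within the iterative procedure.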

  2. Exponential filtering of singular values improves photoacoustic image reconstruction.

    PubMed

    Bhatt, Manish; Gutta, Sreedevi; Yalavarthy, Phaneendra K

    2016-09-01

    Model-based image reconstruction techniques yield better quantitative accuracy in photoacoustic image reconstruction. In this work, an exponential filtering of singular values was proposed for carrying out the image reconstruction in photoacoustic tomography. The results were compared with the widely used Tikhonov regularization, time reversal, and state-of-the-art least-squares QR based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of the data. It was shown that exponential filtering provides superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less sensitive to the choice of regularization parameter and did not incur any additional computational burden, as it was implemented within the Tikhonov filtering framework. It was also shown that the standard Tikhonov filtering becomes an approximation to the proposed exponential filtering. PMID:27607501
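
    The filtering idea can be illustrated on the singular value decomposition of a model matrix; the sketch below (our own, and the exact parametrization of the exponential filter in the paper may differ) contrasts standard Tikhonov filter factors with an exponential alternative.

```python
# SVD-based regularized solution of A x = b with either Tikhonov or exponential
# filtering of the singular values; illustrative parametrization only.
# Assumes A has no zero singular values.
import numpy as np

def filtered_solution(A, b, alpha, exponential=True):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if exponential:
        f = 1.0 - np.exp(-s**2 / alpha)   # exponential filter factors
    else:
        f = s**2 / (s**2 + alpha)         # standard Tikhonov filter factors
    coeffs = f * (U.T @ b) / s
    return Vt.T @ coeffs
```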

  3. Digital infrared thermal imaging following anterior cruciate ligament reconstruction.

    PubMed

    Barker, Lauren E; Markowski, Alycia M; Henneman, Kimberly

    2012-03-01

    This case describes the selective use of digital infrared thermal imaging for a 48-year-old woman who was being treated by a physical therapist following left anterior cruciate ligament (ACL) reconstruction with a semitendinosus autograft. PMID:22383168

  4. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s⁻¹. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.

  5. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s⁻¹. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.

  6. Combining Public Domain and Professional Panoramic Imagery for the Accurate and Dense 3d Reconstruction of the Destroyed Bel Temple in Palmyra

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.; Fangi, G.

    2016-06-01

    This paper exploits the potential of dense multi-image 3d reconstruction of destroyed cultural heritage monuments by either using public domain touristic imagery only or by combining the public domain imagery with professional panoramic imagery. The focus of our work is placed on the reconstruction of the temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigations and the reconstruction were carried out using two types of imagery. The first are freely available generic touristic photos collected from the web. The second are panoramic images captured in 2010 for documenting those monuments. In the paper we present a 3d reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public domain imagery with panoramic images with very wide baselines. We subsequently investigate the aspects of accuracy and completeness obtainable from the public domain touristic images alone and from the combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3d point cloud fragments resulting from the limited coverage of the touristic photos. We then describe an approach using spherical photogrammetry as a virtual topographic survey allowing the co-registration of a detailed and accurate single 3d model of the temple interior and exterior.

  7. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    PubMed

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  8. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-01

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  9. Advanced photoacoustic image reconstruction using the k-Wave toolbox

    NASA Astrophysics Data System (ADS)

    Treeby, B. E.; Jaros, J.; Cox, B. T.

    2016-03-01

    Reconstructing images from measured time domain signals is an essential step in tomography-mode photoacoustic imaging. However, in practice, there are many complicating factors that make it difficult to obtain high-resolution images. These include incomplete or undersampled data, filtering effects, acoustic and optical attenuation, and uncertainties in the material parameters. Here, the processing and image reconstruction steps routinely used by the Photoacoustic Imaging Group at University College London are discussed. These include correction for acoustic and optical attenuation, spatial resampling, material parameter selection, image reconstruction, and log compression. The effect of each of these steps is demonstrated using a representative in vivo dataset. All of the algorithms discussed form part of the open-source k-Wave toolbox (available from http://www.k-wave.org).

  10. Application of mathematical modelling methods for acoustic images reconstruction

    NASA Astrophysics Data System (ADS)

    Bolotina, I.; Kazazaeva, A.; Kvasnikov, K.; Kazazaev, A.

    2016-04-01

    The article considers the reconstruction of images by the Synthetic Aperture Focusing Technique (SAFT). The work compares additive and multiplicative methods for processing the signals received from an antenna array. We show that the multiplicative method gives better resolution. The study also includes the estimation of beam trajectories for antenna arrays using analytical and numerical methods. We show that the analytical estimation method reduces the image reconstruction time when a linear antenna array is used.

  11. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  12. Image Alignment for Tomography Reconstruction from Synchrotron X-Ray Microscopic Images

    PubMed Central

    Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai

    2014-01-01

    A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the “projected feature points” in the sequence of images. The matched projected feature points, traced as a function of rotation angle, should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx. PMID:24416264
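
    The second alignment step, fitting each matched locus with a sine wave, can be sketched with scipy.optimize.curve_fit as below (the parameter names and the exact sine parametrization are our own illustrative choices).

```python
# Fit a sine wave to the horizontal position of one matched feature point as a
# function of rotation angle; the fitted parameters feed the alignment step.
import numpy as np
from scipy.optimize import curve_fit

def sine_model(theta, amplitude, phase, offset):
    return amplitude * np.sin(theta + phase) + offset

def fit_feature_locus(angles_rad, horizontal_positions):
    p0 = [np.ptp(horizontal_positions) / 2.0, 0.0, float(np.mean(horizontal_positions))]
    params, _ = curve_fit(sine_model, angles_rad, horizontal_positions, p0=p0)
    return params  # amplitude, phase, offset
```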

  13. Super-Resolution Image Reconstruction Applied to Medical Ultrasound

    NASA Astrophysics Data System (ADS)

    Ellis, Michael

    Ultrasound is the preferred imaging modality for many diagnostic applications due to its real-time image reconstruction and low cost. Nonetheless, conventional ultrasound is not used in many applications because of limited spatial resolution and soft tissue contrast. Most commercial ultrasound systems reconstruct images using a simple delay-and-sum architecture on receive, which is fast and robust but does not utilize all information available in the raw data. Recently, more sophisticated image reconstruction methods have been developed that make use of far more information in the raw data to improve resolution and contrast. One such method is the Time-Domain Optimized Near-Field Estimator (TONE), which employs a maximum a priori estimation to solve a highly underdetermined problem, given a well-defined system model. TONE has been shown to significantly improve both the contrast and resolution of ultrasound images when compared to conventional methods. However, TONE's lack of robustness to variations from the system model and extremely high computational cost hinder it from being readily adopted in clinical scanners. This dissertation aims to reduce the impact of TONE's shortcomings, transforming it from an academic construct to a clinically viable image reconstruction algorithm. By altering the system model from a collection of individual hypothetical scatterers to a collection of weighted, diffuse regions, dTONE is able to achieve much greater robustness to modeling errors. A method for efficient parallelization of dTONE is presented that reduces reconstruction time by more than an order of magnitude with little loss in image fidelity. An alternative reconstruction algorithm, called qTONE, is also developed and is able to reduce reconstruction times by another two orders of magnitude while simultaneously improving image contrast. Each of these methods for improving TONE are presented, their limitations are explored, and all are used in concert to reconstruct in

  14. Reconstruction of images from radiofrequency electron paramagnetic resonance spectra.

    PubMed

    Smith, C M; Stevens, A D

    1994-12-01

    This paper discusses methods for obtaining image reconstructions from electron paramagnetic resonance (EPR) spectra which constitute object projections. An automatic baselining technique is described which treats each spectrum consistently, rotating the non-horizontal baselines caused by stray magnetic effects onto the horizontal axis. The convolved backprojection method is described for both two- and three-dimensional reconstruction and the effect of cut-off frequency on the reconstruction is illustrated. A slower, indirect, iterative method, which does a non-linear fit to the projection data, is shown to give a far smoother reconstructed image when the method of maximum entropy is used to determine the value of the final residual sum of squares. Although this requires more computing time than the convolved backprojection method, it is more flexible and overcomes the problem of numerical instability encountered in deconvolution. Images from phantom samples in vitro are discussed. The spectral data for these have been accumulated quickly and have a low signal-to-noise ratio. The results show that as few as 16 spectra can still be processed to give an image. Artifacts that appear in images reconstructed with the convolved backprojection method from a small number of projections can be removed by applying a threshold, i.e. only plotting contours higher than a given value. These artifacts are not present in an image which has been reconstructed by the maximum entropy technique. At present these techniques are being applied directly to in vivo studies.

  15. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  16. Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging

    SciTech Connect

    Sheen, David M.; Hall, Thomas E.

    2014-06-09

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  17. A novel building boundary reconstruction method based on lidar data and images

    NASA Astrophysics Data System (ADS)

    Chen, Yiming; Zhang, Wuming; Zhou, Guoqing; Yan, Guangjian

    2013-09-01

    Building boundaries are important for urban mapping and real estate industry applications. The reconstruction of building boundaries is also a significant but difficult step in generating city building models. Because light detection and ranging (Lidar) systems can acquire large, dense point clouds quickly and easily, they offer great advantages for building reconstruction. In this paper, we combine Lidar data and images to develop a novel building boundary reconstruction method. We use only one scan of Lidar data and one image for the reconstruction. The process consists of three steps: projecting boundary Lidar points onto the image, extracting an accurate boundary from the image, and reconstructing the boundary in the Lidar point cloud. We define a relationship between the 3D points and the pixel coordinates, then extract the boundary in the image and use the relationship to obtain the boundary in the point cloud. The method presented here effectively reduces the difficulty of data acquisition. The underlying theory is simple, so the method has low computational complexity. It can also be applied to data acquired by other 3D scanning devices to improve accuracy. Experimental results demonstrate that this method has a clear advantage and higher efficiency over others, particularly for data with large point spacing.
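
    The step of relating 3D Lidar points to pixel coordinates can be illustrated with a standard pinhole camera model; the intrinsic and extrinsic parameters below are placeholders, not the calibration used in the paper.

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project 3D Lidar points into pixel coordinates with a pinhole model.

    points_xyz : (N, 3) points in the Lidar/world frame
    K          : (3, 3) camera intrinsic matrix
    R, t       : rotation (3, 3) and translation (3,) from world to camera frame
    """
    cam = points_xyz @ R.T + t            # world -> camera coordinates
    in_front = cam[:, 2] > 0              # keep points in front of the camera
    uvw = cam[in_front] @ K.T             # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective division -> pixels
    return uv, in_front

# placeholder calibration: 1000 px focal length, 640x480 image centre
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[1.0, 0.5, 10.0], [-2.0, 1.0, 15.0]])
uv, mask = project_points(pts, K, R, t)
print(uv)
```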

  18. Patient-specific minimum-dose imaging protocols for statistical image reconstruction in C-arm cone-beam CT using correlated noise injection

    NASA Astrophysics Data System (ADS)

    Wang, A. S.; Stayman, J. W.; Otake, Y.; Khanna, A. J.; Gallia, G. L.; Siewerdsen, J. H.

    2014-03-01

    Purpose: A new method for accurately portraying the impact of low-dose imaging techniques in C-arm cone-beam CT (CBCT) is presented and validated, allowing identification of minimum-dose protocols suitable to a given imaging task on a patient-specific basis in scenarios that require repeat intraoperative scans. Method: To accurately simulate lower-dose techniques and account for object-dependent noise levels (x-ray quantum noise and detector electronics noise) and correlations (detector blur), noise of the proper magnitude and correlation was injected into the projections from an initial CBCT acquired at the beginning of a procedure. The resulting noisy projections were then reconstructed to yield low-dose preview (LDP) images that accurately depict the image quality at any level of reduced dose in both filtered backprojection and statistical image reconstruction. Validation studies were conducted on a mobile C-arm, with the noise injection method applied to images of an anthropomorphic head phantom and cadaveric torso across a range of lower-dose techniques. Results: Comparison of preview and real CBCT images across a full range of techniques demonstrated accurate noise magnitude (within ~5%) and correlation (matching noise-power spectrum, NPS). Other image quality characteristics (e.g., spatial resolution, contrast, and artifacts associated with beam hardening and scatter) were also realistically presented at all levels of dose and across reconstruction methods, including statistical reconstruction. Conclusion: Generating low-dose preview images for a broad range of protocols gives a useful method to select minimum-dose techniques that accounts for complex factors of imaging task, patient-specific anatomy, and observer preference. The ability to accurately simulate the influence of low-dose acquisition in statistical reconstruction provides an especially valuable means of identifying low-dose limits in a manner that does not rely on a model for the nonlinear
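
    A simplified sketch of the noise-injection idea follows: draw noise whose variance accounts for the extra quantum and electronic noise at the reduced dose, then correlate it with a blur kernel to mimic the detector. The scaling and kernel here are illustrative assumptions; the published method uses a calibrated, object-dependent noise magnitude and correlation model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inject_low_dose_noise(projection, dose_fraction, electronic_sigma=2.0,
                          blur_sigma=0.8, rng=None):
    """Emulate a reduced-dose projection by injecting correlated noise.

    projection      : full-dose projection in counts-like units
    dose_fraction   : target dose relative to the acquired scan (0 < f <= 1)
    electronic_sigma: assumed electronic-noise standard deviation (detector units)
    blur_sigma      : assumed Gaussian sigma emulating detector-blur correlation
    """
    rng = np.random.default_rng() if rng is None else rng
    # extra variance needed to move from the full-dose to the reduced-dose level;
    # this quantum-plus-electronic scaling is a simplified stand-in for the
    # calibrated, object-dependent model used in the paper
    extra_var = projection * (1.0 / dose_fraction - 1.0) + electronic_sigma**2
    noise = rng.standard_normal(projection.shape) * np.sqrt(np.maximum(extra_var, 0.0))
    # correlate the injected noise to mimic detector blur, then restore its magnitude
    correlated = gaussian_filter(noise, blur_sigma)
    correlated *= noise.std() / max(correlated.std(), 1e-12)
    return projection + correlated

proj = np.full((64, 64), 1000.0)                  # toy flat-field projection
preview = inject_low_dose_noise(proj, dose_fraction=0.25,
                                rng=np.random.default_rng(0))
print(preview.std())
```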

  19. Improving lesion detectability in PET imaging with a penalized likelihood reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Wangerin, Kristen A.; Ahn, Sangtae; Ross, Steven G.; Kinahan, Paul E.; Manjeshwar, Ravindra M.

    2015-03-01

    Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM, and importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality as previously shown but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both non-prewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.
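
    For reference, the relative difference penalty that BSREM-type algorithms commonly use can be written, for neighboring voxels j and k, as (f_j - f_k)^2 / (f_j + f_k + gamma*|f_j - f_k|), summed over neighbor pairs. The snippet below simply evaluates that penalty over a 2D image with 4-connected neighbors; the gamma value and neighborhood choice are assumptions, not the study's settings.

```python
import numpy as np

def relative_difference_penalty(img, gamma=2.0, eps=1e-12):
    """Evaluate the relative difference penalty over 4-connected neighbor pairs.

    penalty = sum over pairs of (f_j - f_k)^2 / (f_j + f_k + gamma*|f_j - f_k|)
    """
    total = 0.0
    for axis in (0, 1):
        a = img
        b = np.roll(img, -1, axis=axis)
        # drop the wrapped-around pairs at the image border
        sl = [slice(None), slice(None)]
        sl[axis] = slice(0, -1)
        d = (a - b)[tuple(sl)]
        s = (a + b)[tuple(sl)]
        total += np.sum(d**2 / (s + gamma * np.abs(d) + eps))
    return total

img = np.abs(np.random.default_rng(1).standard_normal((8, 8))) + 1.0
print(relative_difference_penalty(img))
```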

  20. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.

  1. Image oscillation reduction and convergence acceleration for OSEM reconstruction

    SciTech Connect

    Huang, S.C.

    1999-06-01

    The authors have investigated two approaches to reduce the image oscillation of OSEM reconstruction that is due to the inconsistencies among different partial subsets of the projection measurements (sinogram) when they are considered as a group. One approach pre-processes the sinogram to make it satisfy a sinogram consistency condition. The second approach takes the average of the intermediary images (i.e., smooths image values over sub-iterations). Both approaches were found to be capable of reducing the image oscillation, and the combination of the two was most effective. With these approaches, the convergence of OSEM reconstruction is further improved. For computer-simulated data and real PET data, a single iteration of the new OSEM reconstruction was shown to yield images comparable to those obtained with 80 EM iterations.
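
    The second approach, averaging the intermediary images over sub-iterations, amounts to a running mean over the subset updates. A schematic sketch is shown below, with a placeholder osem_subset_update callable standing in for the actual projector-based OSEM update; whether the averaged image also seeds the next iteration is a design choice left open here.

```python
import numpy as np

def osem_with_subiteration_averaging(image, subsets, osem_subset_update, n_iter=1):
    """Run OSEM while averaging the intermediary (sub-iteration) images.

    image              : initial image estimate (ndarray)
    subsets            : list of projection subsets
    osem_subset_update : callable (image, subset) -> updated image  [placeholder]
    """
    for _ in range(n_iter):
        running_sum = np.zeros_like(image)
        for subset in subsets:
            image = osem_subset_update(image, subset)
            running_sum += image
        # smooth the oscillation by averaging over the sub-iteration images
        image = running_sum / len(subsets)
    return image

# toy demo: a fake update that nudges the image toward a subset-dependent target
def fake_update(img, subset):
    return 0.5 * img + 0.5 * subset

subsets = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0, 4.0)]
print(osem_with_subiteration_averaging(np.zeros((4, 4)), subsets, fake_update))
```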

  2. Compensation for air voids in photoacoustic computed tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Li, Lei; Wang, Lihong V.; Anastasio, Mark A.

    2016-03-01

    Most image reconstruction methods in photoacoustic computed tomography (PACT) assume that the acoustic properties of the object and the surrounding medium are homogeneous. This can lead to strong artifacts in the reconstructed images when there are significant variations in sound speed or density. Air voids represent a particular challenge due to the severity of the differences between the acoustic properties of air and water. In whole-body small animal imaging, the presence of air voids in the lungs, stomach, and gastrointestinal system can limit image quality over large regions of the object. Iterative reconstruction methods based on the photoacoustic wave equation can account for these acoustic variations, leading to improved resolution, improved contrast, and a reduction in the number of imaging artifacts. However, the strong acoustic heterogeneities can lead to instability or errors in the numerical wave solver. Here, the impact of air voids on PACT image reconstruction is investigated, and procedures for their compensation are proposed. The contributions of sound speed and density variations to the numerical stability of the wave solver are considered, and a novel approach for mitigating the impact of air voids while reducing the computational burden of image reconstruction is identified. These results are verified by application to an experimental phantom.

  3. Geoaccurate three-dimensional reconstruction via image-based geometry

    NASA Astrophysics Data System (ADS)

    Walvoord, Derek J.; Rossi, Adam J.; Paul, Bradley D.; Brower, Bernie; Pellechia, Matthew F.

    2013-05-01

    Recent technological advances in computing capabilities and persistent surveillance systems have led to increased focus on new methods of exploiting geospatial data, bridging traditional photogrammetric techniques and state-of-the-art multiple view geometry methodology. The structure from motion (SfM) problem in Computer Vision addresses scene reconstruction from uncalibrated cameras, and several methods exist to remove the inherent projective ambiguity. However, the reconstruction remains in an arbitrary world coordinate frame without knowledge of its relationship to a fixed earth-based coordinate system. This work presents a novel approach for obtaining geoaccurate image-based 3-dimensional reconstructions in the absence of ground control points by using a SfM framework and the full physical sensor model of the collection system. Absolute position and orientation information provided by the imaging platform can be used to reconstruct the scene in a fixed world coordinate system. Rather than triangulating pixels from multiple image-to-ground functions, each with its own random error, the relative reconstruction is computed via image-based geometry, i.e., geometry derived from image feature correspondences. In other words, the geolocation accuracy is improved using the relative distances provided by the SfM reconstruction. Results from the Exelis Wide-Area Motion Imagery (WAMI) system are provided to discuss conclusions and areas for future work.

  4. Noninvasive Vascular Displacement Estimation for Relative Elastic Modulus Reconstruction in Transversal Imaging Planes

    PubMed Central

    Hansen, Hendrik H.G.; Richards, Michael S.; Doyley, Marvin M.; de Korte, Chris L.

    2013-01-01

    Atherosclerotic plaque rupture can initiate stroke or myocardial infarction. Lipid-rich plaques with thin fibrous caps have a higher risk to rupture than fibrotic plaques. Elastic moduli differ for lipid-rich and fibrous tissue and can be reconstructed using tissue displacements estimated from intravascular ultrasound radiofrequency (RF) data acquisitions. This study investigated if modulus reconstruction is possible for noninvasive RF acquisitions of vessels in transverse imaging planes using an iterative 2D cross-correlation based displacement estimation algorithm. Furthermore, since it is known that displacements can be improved by compounding of displacements estimated at various beam steering angles, we compared the performance of the modulus reconstruction with and without compounding. For the comparison, simulated and experimental RF data were generated of various vessel-mimicking phantoms. Reconstruction errors were less than 10%, which seems adequate for distinguishing lipid-rich from fibrous tissue. Compounding outperformed single-angle reconstruction: the interquartile range of the reconstructed moduli for the various homogeneous phantom layers was approximately two times smaller. Additionally, the estimated lateral displacements were a factor of 2–3 better matched to the displacements corresponding to the reconstructed modulus distribution. Thus, noninvasive elastic modulus reconstruction is possible for transverse vessel cross sections using this cross-correlation method and is more accurate with compounding. PMID:23478602

  5. Noninvasive vascular displacement estimation for relative elastic modulus reconstruction in transversal imaging planes.

    PubMed

    Hansen, Hendrik H G; Richards, Michael S; Doyley, Marvin M; de Korte, Chris L

    2013-01-01

    Atherosclerotic plaque rupture can initiate stroke or myocardial infarction. Lipid-rich plaques with thin fibrous caps have a higher risk to rupture than fibrotic plaques. Elastic moduli differ for lipid-rich and fibrous tissue and can be reconstructed using tissue displacements estimated from intravascular ultrasound radiofrequency (RF) data acquisitions. This study investigated if modulus reconstruction is possible for noninvasive RF acquisitions of vessels in transverse imaging planes using an iterative 2D cross-correlation based displacement estimation algorithm. Furthermore, since it is known that displacements can be improved by compounding of displacements estimated at various beam steering angles, we compared the performance of the modulus reconstruction with and without compounding. For the comparison, simulated and experimental RF data were generated of various vessel-mimicking phantoms. Reconstruction errors were less than 10%, which seems adequate for distinguishing lipid-rich from fibrous tissue. Compounding outperformed single-angle reconstruction: the interquartile range of the reconstructed moduli for the various homogeneous phantom layers was approximately two times smaller. Additionally, the estimated lateral displacements were a factor of 2-3 better matched to the displacements corresponding to the reconstructed modulus distribution. Thus, noninvasive elastic modulus reconstruction is possible for transverse vessel cross sections using this cross-correlation method and is more accurate with compounding.

  6. Expectation maximization SPECT reconstruction with a content-adaptive singularity-based mesh-domain image model

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Ye, Hongwei; Xu, Yuesheng; Hu, Xiaofei; Vogelsang, Levon; Shen, Lixin; Feiglin, David; Lipson, Edward; Krol, Andrzej

    2008-03-01

    To improve the speed and quality of ordered-subsets expectation-maximization (OSEM) SPECT reconstruction, we have implemented a content-adaptive, singularity-based, mesh-domain, image model (CASMIM) with an accurate algorithm for estimation of the mesh-domain system matrix. A preliminary image, used to initialize CASMIM reconstruction, was obtained using pixel-domain OSEM. The mesh-domain representation of the image was produced by a 2D wavelet transform followed by Delaunay triangulation to obtain joint estimation of nodal locations and their activity values. A system matrix with attenuation compensation was investigated. Digital chest phantom SPECT was simulated and reconstructed. The quality of images reconstructed with OSEM-CASMIM is comparable to that from pixel-domain OSEM, but images are obtained five times faster by the CASMIM method.

  7. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
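
    One common family of registration approaches is FFT cross-correlation with sub-pixel refinement of the correlation peak. The sketch below is a generic illustration of that idea, not a reproduction of the JWST algorithms; a flight pipeline would use a more refined upsampled fit to reach the ~1/20-pixel level.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def register_translation(reference, moving):
    """Estimate the (dy, dx) shift that, applied to `moving`, aligns it with `reference`.

    FFT cross-correlation locates the integer-pixel peak; a parabolic fit around
    the peak refines it to sub-pixel precision (coarser than an upsampled fit).
    """
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))))
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    shift = []
    for axis, p in enumerate(peak):
        n = xcorr.shape[axis]
        idx = list(peak)
        idx[axis] = (p - 1) % n
        y0 = xcorr[tuple(idx)]
        y1 = xcorr[tuple(peak)]
        idx[axis] = (p + 1) % n
        y2 = xcorr[tuple(idx)]
        denom = y0 - 2.0 * y1 + y2
        delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
        s = p + delta
        if s > n / 2:          # interpret large positive lags as negative shifts
            s -= n
        shift.append(s)
    return np.array(shift)

# demo: a smooth "star" image and a copy offset by a known sub-pixel amount
yy, xx = np.mgrid[0:128, 0:128]
reference = np.exp(-((yy - 60.0)**2 + (xx - 70.0)**2) / (2 * 8.0**2))
science = nd_shift(reference, (3.3, -1.7), order=3, mode="nearest")

est = register_translation(reference, science)     # expected near (-3.3, 1.7)
aligned = nd_shift(science, est, order=3, mode="nearest")
print("estimated correction:", est)
print("residual rms:", (reference - aligned).std())
```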

  8. Radiometrically accurate thermal imaging in the Landsat program

    NASA Astrophysics Data System (ADS)

    Lansing, Jack C., Jr.

    1988-01-01

    Methods of calibrating Landsat TM thermal IR data have been developed so that the residual error is reduced to 0.9 K (1 standard deviation). Methods for verifying the radiometric performance of TM on orbit and ground calibration methods are discussed. The preliminary design of the enhanced TM for Landsat-6 is considered. A technique for accurately reducing raw data from the Landsat-5 thermal band is described in detail.

  9. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed with the GATE MC package, and images were reconstructed with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and quadratic-prior OSMAPOSL algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various numbers of iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improved with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.

  10. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response would require separation of these processes because they define radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model which enables development of more

  11. Accurate detection of blood vessels improves the detection of exudates in color fundus images.

    PubMed

    Youssef, Doaa; Solouma, Nahed H

    2012-12-01

    Exudates are one of the earliest and most prevalent symptoms of diseases leading to blindness such as diabetic retinopathy and macular degeneration. Certain areas of the retina with such conditions are to be photocoagulated by laser to stop the disease progress and prevent blindness. Outlining these areas is dependent on outlining the lesions and the anatomic structures of the retina. In this paper, we provide a new method for the detection of blood vessels that improves the detection of exudates in fundus photographs. The method starts with an edge detection algorithm which results in an over-segmented image. A new feature-based algorithm is then used to accurately detect the blood vessels. This algorithm considers the characteristics of a retinal blood vessel, such as its width range, intensities, and orientations, for the purpose of selective segmentation. Because of its bulb shape and its color similarity with exudates, the optic disc can be detected using the common Hough transform technique. The extracted blood vessel tree and optic disc can be subtracted from the over-segmented image to get an initial estimate of exudates. The final estimation of exudates can then be obtained by morphological reconstruction based on the appearance of exudates. This method is shown to be promising since it increases the sensitivity and specificity of exudate detection to 80% and 100%, respectively.
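
    The optic-disc step can be illustrated with OpenCV's circular Hough transform; the preprocessing, radius range, and parameter values below are assumptions made for the sketch, not the paper's settings.

```python
import cv2
import numpy as np

def detect_optic_disc(fundus_bgr, min_radius=40, max_radius=90):
    """Locate a bright, roughly circular optic disc with the Hough transform.

    Returns (x, y, r) of the strongest circle, or None if nothing is found.
    """
    # the green channel usually carries the most retinal contrast (assumption)
    green = fundus_bgr[:, :, 1]
    smoothed = cv2.medianBlur(green, 11)
    circles = cv2.HoughCircles(
        smoothed, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
        param1=60, param2=40, minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return None
    x, y, r = circles[0, 0]
    return float(x), float(y), float(r)

# toy demo: a synthetic "fundus" with one bright disc
img = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(img, (420, 240), 60, (90, 200, 90), thickness=-1)
print(detect_optic_disc(img))
```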

  12. Probe and object function reconstruction in incoherent stem imaging

    SciTech Connect

    Nellist, P.D.; Pennycook, S.J.

    1996-09-01

    Using the phase-object approximation it is shown how an annular dark-field (ADF) detector in a scanning transmission electron microscope (STEM) leads to an image which can be described by an incoherent model. The point spread function is found to be simply the illuminating probe intensity. An important consequence of this is that there is no phase problem in the imaging process, which allows various image processing methods to be applied directly to the image intensity data. Using an image of a GaAs<110>, the probe intensity profile is reconstructed, confirming the existence of a 1.3 Å probe in a 300 kV STEM. It is shown that simply deconvolving this reconstructed probe from the image data does not improve its interpretability because the dominant effects of the imaging process arise simply from the restricted resolution of the microscope. However, use of the reconstructed probe in a maximum entropy reconstruction is demonstrated, which allows information beyond the resolution limit to be restored and does allow improved image interpretation.

  13. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466
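
    A generic sketch of the second method, a Tikhonov-regularized pseudoinverse with respect to a dictionary: restrict the dictionary to the sampled q-space locations, precompute the regularized pseudoinverse once, and apply it to every voxel's undersampled signal. The sizes, sampling mask, and regularization weight below are arbitrary placeholders.

```python
import numpy as np

def dictionary_tikhonov_reconstructor(D, sample_idx, lam=0.1):
    """Precompute a Tikhonov-regularized pseudoinverse reconstruction operator.

    D          : (n_q, n_atoms) dictionary; columns are training signals on the full q-space grid
    sample_idx : indices of the acquired (undersampled) q-space points
    lam        : Tikhonov regularization weight
    Returns R of shape (n_q, n_samples) such that full_signal ≈ R @ measured_samples.
    """
    Ds = D[sample_idx, :]                                   # dictionary at sampled points
    n_atoms = D.shape[1]
    coef_op = np.linalg.solve(Ds.T @ Ds + lam * np.eye(n_atoms), Ds.T)
    return D @ coef_op                                      # samples -> coefficients -> full signal

rng = np.random.default_rng(0)
n_q, n_atoms = 515, 120                 # illustrative sizes only
D = rng.standard_normal((n_q, n_atoms))
sample_idx = rng.choice(n_q, size=170, replace=False)       # ~3x undersampling
R = dictionary_tikhonov_reconstructor(D, sample_idx, lam=0.05)

true_signal = D @ rng.standard_normal(n_atoms)               # a signal in the dictionary's span
recovered = R @ true_signal[sample_idx]
print(np.linalg.norm(recovered - true_signal) / np.linalg.norm(true_signal))
```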

  14. Reconstruction of indoor scene from a single image

    NASA Astrophysics Data System (ADS)

    Wu, Di; Li, Hongyu; Zhang, Lin

    2015-03-01

    Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover a 3D model of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and the plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With Phase Congruency and the Harris Detector, the line segments can be detected exactly, which is a prerequisite. A perspective transform is defined as a reliable method whereby points in the image can be mapped onto the 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image although the available depth information is limited.

  15. Few-view image reconstruction with dual dictionaries.

    PubMed

    Lu, Yang; Zhao, Jun; Wang, Ge

    2012-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV.

  16. Few-view image reconstruction with dual dictionaries

    PubMed Central

    Lu, Yang; Zhao, Jun; Wang, Ge

    2011-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV. PMID:22155989

  17. Cervigram image segmentation based on reconstructive sparse representations

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris

    2010-03-01

    We proposed an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparsity constraints and become more separable. The K-SVD algorithm is employed to find sparse representations and corresponding dictionaries. The data can be reconstructed from their sparse representations and positive and/or negative dictionaries. Classification can be achieved by comparing the reconstructive errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.
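
    The classification-by-reconstruction-error step can be sketched as follows: encode a feature vector against a positive and a negative dictionary and assign the label of whichever dictionary yields the smaller residual. The random dictionaries below are stand-ins for K-SVD-trained ones, and the sparsity level is an assumption.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def classify_by_reconstruction(feature, dict_pos, dict_neg, n_nonzero=5):
    """Assign the label of the dictionary giving the smaller sparse-coding residual."""
    errs = []
    for D in (dict_pos, dict_neg):
        coef = orthogonal_mp(D, feature, n_nonzero_coefs=n_nonzero)
        errs.append(np.linalg.norm(feature - D @ coef))
    return ("positive" if errs[0] <= errs[1] else "negative"), errs

rng = np.random.default_rng(0)
dim, n_atoms = 64, 128
# random stand-ins for K-SVD-trained dictionaries (columns are unit-norm atoms)
dict_pos = rng.standard_normal((dim, n_atoms))
dict_pos /= np.linalg.norm(dict_pos, axis=0)
dict_neg = rng.standard_normal((dim, n_atoms))
dict_neg /= np.linalg.norm(dict_neg, axis=0)

# a feature vector synthesized from a few "positive" atoms
feature = dict_pos[:, [3, 17, 40]] @ np.array([1.0, -0.5, 2.0])
label, errors = classify_by_reconstruction(feature, dict_pos, dict_neg)
print(label, errors)
```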

  18. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  19. Reconstruction of a cone-beam CT image via forward iterative projection matching

    SciTech Connect

    Brock, R. Scott; Docef, Alen; Murphy, Martin J.

    2010-12-15

    Purpose: To demonstrate the feasibility of reconstructing a cone-beam CT (CBCT) image by deformably altering a prior fan-beam CT (FBCT) image such that it matches the anatomy portrayed in the CBCT projection data set. Methods: A prior FBCT image of the patient is assumed to be available as a source image. A CBCT projection data set is obtained and used as a target image set. A parametrized deformation model is applied to the source FBCT image, digitally reconstructed radiographs (DRRs) that emulate the CBCT projection image geometry are calculated and compared to the target CBCT projection data, and the deformation model parameters are adjusted iteratively until the DRRs optimally match the CBCT projection data set. The resulting deformed FBCT image is hypothesized to be an accurate representation of the patient's anatomy imaged by the CBCT system. The process is demonstrated via numerical simulation. A known deformation is applied to a prior FBCT image and used to create a synthetic set of CBCT target projections. The iterative projection matching process is then applied to reconstruct the deformation represented in the synthetic target projections; the reconstructed deformation is then compared to the known deformation. The sensitivity of the process to the number of projections and the DRR/CBCT projection mismatch is explored by systematically adding noise to and perturbing the contrast of the target projections relative to the iterated source DRRs and by reducing the number of projections. Results: When there is no noise or contrast mismatch in the CBCT projection images, a set of 64 projections allows the known deformed CT image to be reconstructed to within a nRMS error of 1% and the known deformation to within a nRMS error of 7%. A CT image nRMS error of less than 4% is maintained at noise levels up to 3% of the mean projection intensity, at which the deformation error is 13%. At 1% noise level, the number of projections can be reduced to 8 while maintaining

  20. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be a health concern for the imaged subject. CBCT-imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  1. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT.

    PubMed

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-21

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be a health concern for the imaged subject. CBCT-imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  2. Sparsity-constrained three-dimensional image reconstruction for C-arm angiography.

    PubMed

    Rashed, Essam A; al-Shatouri, Mohammad; Kudo, Hiroyuki

    2015-07-01

    The X-ray C-arm is an important imaging tool in interventional radiology, road-mapping, and radiation therapy because it provides accurate descriptions of vascular anatomy and the therapeutic end point. In common interventional radiology, the C-arm scanner produces a set of two-dimensional (2D) X-ray projection data obtained with a detector by rotating the scanner gantry around the patient. Unlike conventional fluoroscopic imaging, three-dimensional (3D) C-arm computed tomography (CT) provides more accurate cross-sectional images, which are helpful for therapy planning, guidance and evaluation in interventional radiology. However, 3D vascular imaging using conventional C-arm fluoroscopy encounters some geometric challenges. Inspired by the theory of compressed sensing, we developed an image reconstruction algorithm for conventional angiography C-arm scanners. The main challenge in this image reconstruction problem is the limitations of the projection data. We consider a small number of views acquired from a short rotation orbit with offset scan geometry. The proposed method, called sparsity-constrained angiography (SCAN), is developed using the alternating direction method of multipliers, and the results obtained from simulated and real data are encouraging. The SCAN algorithm provides a framework to generate 3D vascular images using conventional C-arm scanners at lower cost than conventional 3D imaging scanners.
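
    The ADMM structure that such sparsity-constrained formulations build on can be illustrated with the standard l1-regularized least-squares splitting. The generic sketch below is not the SCAN algorithm itself; the system matrix, sparsity level, and parameters are toy placeholders.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_l1(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 via ADMM with an x-z splitting."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)     # factor of the quadratic x-update, formed once
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # data-fidelity x-update
        z = soft_threshold(x + u, lam / rho)            # sparsity-promoting z-update
        u = u + x - z                                   # dual (scaled multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))       # toy underdetermined system (e.g. few views)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true
x_hat = admm_l1(A, b, lam=0.05)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```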

  3. An accurate test for acute appendicitis: In-111 WBC imaging

    SciTech Connect

    Navarro, D.A.; Weber, P.M.; Kang, I.Y.; dosRemedios, L.V.; Jasko, I.A.

    1985-05-01

    The decision to operate when acute appendicitis (APPY) is suspected is often difficult. Surgeons accept up to a 20% false positive rate to avoid any delay that may result in appendiceal rupture and peritonitis. The authors have successfully improved early diagnostic accuracy by using abdominal imaging beginning 2 hours after injecting In-111 labeled WBC. Patients with clear-cut (APPY) had laparotomy and were not studied. Those who were to be observed in the ER for possible (APPY) had their leukocytes harvested, labeled with In-111 oxine, and reinjected. Abnormal localized activity in the right lower quadrant (RLQ) imaged at 2 hours was graded relative to bone marrow activity (BM) on a scale of 0 to 3+. When available, the surgical specimen was imaged for In-111 activity. Of 31 patients studied, 13 had positive scans for (APPY), all surgically confirmed. There were 4 additional abnormal studies, all demonstrating known diagnostic patterns, 2 of peritonitis and 2 of colitis. There were 14 negative studies; in 8 of these the clinical course was benign, and the remaining 6 had laparotomy, with 3 having (APPY) and 3 not. Thus there were no false positives and 3 false negatives. One case negative at 2 hours had appendiceal activity later. The 3 cases with 3+ activity all had appendiceal abscesses. This new application of In-111 oxine WBC imaging is safe, simple, sensitive and specific. It shortens the time to surgical intervention and should reduce the surgical false positive rate.

  4. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  5. Scalar wave-optical reconstruction of plenoptic camera images.

    PubMed

    Junker, André; Stenau, Tim; Brenner, Karl-Heinz

    2014-09-01

    We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to continuing miniaturization of hardware components it can be assumed that in combination with low-aperture optical systems this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects.

  6. Tomographic mesh generation for OSEM reconstruction of SPECT images

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Yu, Bo; Vogelsang, Levon; Krol, Andrzej; Xu, Yuesheng; Hu, Xiaofei; Feiglin, David

    2009-02-01

    To improve the quality of OSEM SPECT reconstruction in the mesh domain, we implemented an adaptive mesh generation method that produces a tomographic mesh consisting of triangular elements whose size and density are commensurate with the geometric detail of the objects. Node density and element size change smoothly as a function of distance from the edges and of edge curvature, without the creation of 'bad' elements. The tomographic performance of mesh-based OSEM reconstruction is controlled by the tomographic mesh structure, i.e., the node density distribution, which in turn is governed by the number of key points on the boundaries. A greedy algorithm is used to influence the distribution of nodes on the boundaries. The relationship between tomographic mesh properties and OSEM reconstruction quality has been investigated. We conclude that by selecting an adequate number of key points, one can produce a tomographic mesh with the lowest number of nodes sufficient to provide the desired quality of reconstructed images, appropriate to the imaging system properties.

  7. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).

  8. An adaptive filtered back-projection for photoacoustic image reconstruction

    SciTech Connect

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-05-15

    Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing

  9. An adaptive filtered back-projection for photoacoustic image reconstruction

    PubMed Central

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-01-01

    Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing

  10. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  11. Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis

    SciTech Connect

    Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor

    2011-10-15

    Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm, iteratively, compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for C-arm full pose tracking traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (mean ± SD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducial or external trackers. Considering the efficacy of the algorithm, its successful reconstruction rate and low computational burden, the algorithm is feasible for clinical use.

  12. Reconstruction of electrostatic force microscopy images

    NASA Astrophysics Data System (ADS)

    Strassburg, E.; Boag, A.; Rosenwaks, Y.

    2005-08-01

    An efficient algorithm to restore the actual surface potential image from Kelvin probe force microscopy measurements of semiconductors is presented. The three-dimensional potential of the tip-sample system is calculated using an integral equation-based boundary element method combined with modeling the semiconductor by an equivalent dipole-layer and image-charge model. The derived point spread function of the measuring tip is then used to restore the actual surface potential from the measured image, using noise filtration and deconvolution algorithms. The model is then used to restore high-resolution Kelvin probe microscopy images of semiconductor surfaces.
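
    The restoration step (measured potential ≈ true potential convolved with the tip point spread function, plus noise) can be illustrated with a Fourier-domain Wiener deconvolution; the PSF and noise level below are synthetic placeholders rather than the dipole-layer and image-charge model of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def wiener_deconvolve(measured, psf, nsr=1e-2):
    """Deconvolve a known PSF from an image with a Fourier-domain Wiener filter."""
    kernel = np.zeros_like(measured)
    ky, kx = psf.shape
    kernel[:ky, :kx] = psf
    # center the PSF at the array origin so the filter introduces no shift
    kernel = np.roll(kernel, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(measured)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(F_hat))

# synthetic example: a sharp potential step blurred by a Gaussian "tip" PSF
yy, xx = np.mgrid[0:128, 0:128]
true_potential = (xx > 64).astype(float)
py, px = np.mgrid[-8:9, -8:9]
psf = np.exp(-(py**2 + px**2) / (2 * 3.0**2))
psf /= psf.sum()
rng = np.random.default_rng(0)
measured = fftconvolve(true_potential, psf, mode="same")
measured += 0.01 * rng.standard_normal(measured.shape)

restored = wiener_deconvolve(measured, psf, nsr=1e-3)
print(np.abs(restored - true_potential).mean(),
      np.abs(measured - true_potential).mean())
```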

  13. Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc

    2015-03-01

    The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found, or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well-defined in the case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue, alternative methodologies such as model observers have recently been proposed to allow quantification of a usually task-dependent image quality metric [1]. As an alternative, we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding, we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.

  14. Calibrating the High Density Magnetic Port within Tissue Expanders to Achieve more Accurate Dose Calculations for Postmastectomy Patients with Immediate Breast Reconstruction

    NASA Astrophysics Data System (ADS)

    Jones, Jasmine; Zhang, Rui; Heins, David; Castle, Katherine

    In postmastectomy radiotherapy, an increasing number of patients have tissue expanders inserted subpectorally when receiving immediate breast reconstruction. These tissue expanders are composed of silicone and are inflated with saline through an internal metallic port; this serves the purpose of stretching the muscle and skin tissue over time, in order to house a permanent implant. The issue with administering radiation therapy in the presence of a tissue expander is that the port's magnetic core can potentially perturb the dose delivered to the Planning Target Volume (PTV), causing significant artifacts in CT images. Several studies have explored this problem, and suggest that density corrections must be accounted for in treatment planning. However, very few studies accurately calibrated commercial treatment planning (TP) systems for the high-density material used in the port, and no studies employed fusion imaging to yield a more accurate contour of the port in treatment planning. We compared depth-dose values in a water phantom between measurement and TP system calculations, and we were able to overcome some of the inhomogeneities presented by the image artifact by fusing the kVCT and MVCT images of the tissue expander together, resulting in a more precise comparison of dose calculations at discrete locations. We expect this method to be pivotal in the quantification of dose distribution in the PTV. Research funded by the LS-AMP Award.

  15. Respiratory motion correction in emission tomography image reconstruction.

    PubMed

    Reyes, Mauricio; Malandain, Grégoire; Koulibaly, Pierre Malick; González Ballester, Miguel A; Darcourt, Jacques

    2005-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations and imprecise diagnoses. Solutions such as respiratory gating, correlated dynamic PET techniques, list-mode data based techniques and others have been tested, with improvements in the spatial activity distribution of lung lesions, but with the disadvantage of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion correction directly into the image reconstruction process, without any additional acquisition protocol consideration. To this end, we propose an extension to the Maximum Likelihood Expectation Maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by the respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data.
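
    The following toy sketch illustrates the general idea of folding a motion model into the MLEM forward model; the projector P, the per-gate warps W_g and all sizes are invented stand-ins, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): motion-compensated MLEM in which the
# forward model for each respiratory gate g is A_g = P @ W_g, i.e. a warp W_g of
# the reference image followed by projection P.  All operators are tiny dense
# matrices purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_bins, n_gates = 16, 24, 4

P = rng.random((n_bins, n_vox))                                   # toy projection operator
W = [np.roll(np.eye(n_vox), g, axis=1) for g in range(n_gates)]   # toy "warps" (circular shifts)
x_true = rng.random(n_vox) + 0.1
y = [rng.poisson(50 * P @ Wg @ x_true) for Wg in W]               # gated noisy projections

x = np.ones(n_vox)                                                 # MLEM initialisation
sens = sum(Wg.T @ P.T @ np.ones(n_bins) for Wg in W)               # total sensitivity image
for _ in range(100):
    backproj = sum(Wg.T @ P.T @ (yg / np.maximum(P @ Wg @ x, 1e-12))
                   for Wg, yg in zip(W, y))
    x *= backproj / sens                                           # multiplicative MLEM update

print("relative error:", np.linalg.norm(x - 50 * x_true) / np.linalg.norm(50 * x_true))
```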

  16. Image Reconstruction for a Partially Collimated Whole Body PET Scanner.

    PubMed

    Alessio, Adam M; Schmitz, Ruth E; Macdonald, Lawrence R; Wollenweber, Scott D; Stearns, Charles W; Ross, Steven G; Ganin, Alex; Lewellen, Thomas K; Kinahan, Paul E

    2008-06-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary.

  17. 5D respiratory motion model based image reconstruction algorithm for 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao

    2015-11-01

    4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However, the image reconstruction often involves the binning of projection data into each temporal phase, and therefore suffers from deteriorated image quality due to inaccurate or uneven binning in phase, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion. That is, given the measurements of breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after the projection binning, the new method reconstructs the time-independent reference image and vector fields with no requirement of binning. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields, and the problem is solved by the proximal alternating minimization algorithm, during which the split Bregman method is used to reconstruct the reference image, and Chambolle's duality-based algorithm is used to reconstruct the vector fields. The convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method significantly improves image reconstruction accuracy due to the absence of binning and the reduced number of unknowns via the use of the 5D model.
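
    A minimal sketch of how such a 5D parametrization can be evaluated: a reference image is warped by a displacement field built from two static vector fields scaled by the breathing amplitude v and its rate f. The fields and image below are invented toy quantities, not the authors' data or code.

```python
# Illustrative sketch of the 5D motion model described above: each voxel's
# displacement is parametrised by two static vector fields (alpha, beta) scaled
# by the measured breathing amplitude v and its time derivative f, and applied
# to a single reference image.
import numpy as np
from scipy.ndimage import map_coordinates

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
reference = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)     # toy reference image

alpha = np.stack([0.05 * (yy - 32), np.zeros_like(xx)])           # amplitude-driven field (dy, dx)
beta  = np.stack([np.zeros_like(yy), 0.02 * (xx - 32)])           # rate-driven field (dy, dx)

def image_at(v, f):
    """Warp the reference image to breathing state (amplitude v, rate f)."""
    disp = alpha * v + beta * f                                    # per-voxel displacement
    coords = np.stack([yy - disp[0], xx - disp[1]])                # pull-back sampling grid
    return map_coordinates(reference, coords, order=1, mode="nearest")

frame = image_at(v=1.0, f=0.3)
print(frame.shape)
```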

  18. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of calculation in radar imaging.
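
    A minimal compressed-sensing sketch of the underlying idea, under assumed toy dimensions and a generic random measurement operator rather than an actual ISAR geometry: a sparse scene is recovered from far fewer measurements than unknowns by l1-regularized least squares, solved here with plain ISTA.

```python
# Sparse recovery sketch: 8 point scatterers in a 256-pixel scene are recovered
# from 64 random linear measurements via iterative soft thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(1)
n = 16 * 16                          # 16x16 scene, vectorised
m = n // 4                           # 4x undersampling
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.random(8) + 1.0     # sparse reflectivity
y = A @ x_true

lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):                 # ISTA: gradient step followed by soft threshold
    x = x - step * A.T @ (A @ x - y)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("reconstruction error:", np.linalg.norm(x - x_true))
```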

  19. Ultra-Fast Image Reconstruction of Tomosynthesis Mammography Using GPU

    PubMed Central

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-01-01

    Digital Breast Tomosynthesis (DBT) is a technology that creates three-dimensional (3D) images of breast tissue. Tomosynthesis mammography detects lesions that are not detectable with other imaging systems. If the image reconstruction time is on the order of seconds, tomosynthesis systems can be used to perform tomosynthesis-guided interventional procedures. This research was designed to study an ultra-fast image reconstruction technique for tomosynthesis mammography systems using the graphics processing unit (GPU). First, tomosynthesis mammography projections were simulated. To produce the tomosynthesis projections, a 3D breast phantom was designed from empirical MRI data in its natural form, and projections were then created from this 3D breast phantom. The FBP-based image reconstruction algorithm was programmed in C++ in two implementations, one running on the central processing unit (CPU) and one on the graphics processing unit (GPU), and the image reconstruction time was measured for both. PMID:26171373

  20. Image reconstruction for hybrid true-color micro-CT.

    PubMed

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2012-06-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid "true-color" micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain serves as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a "color diffusion" phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could potentially reduce the color detector size for a given ROI, further reducing system cost and radiation dose.

  1. Full field spatially-variant image-based resolution modelling reconstruction for the HRRT.

    PubMed

    Angelis, Georgios I; Kotasidis, Fotis A; Matthews, Julian C; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J

    2015-03-01

    Accurate characterisation of the scanner's point spread function across the entire field of view (FOV) is crucial in order to account for spatially dependent factors that degrade the resolution of the reconstructed images. The HRRT users' community resolution modelling reconstruction software includes a shift-invariant resolution kernel, which leads to transaxially non-uniform resolution in the reconstructed images. Unlike previous work to date in this field, this work is the first to model the spatially variant resolution across the entire FOV of the HRRT, which is the highest resolution human brain PET scanner in the world. In this paper we developed a spatially variant image-based resolution modelling reconstruction dedicated to the HRRT, using an experimentally measured shift-variant resolution kernel. Previously, the system response was measured and characterised in detail across the entire FOV of the HRRT, using a printed point source array. The newly developed resolution modelling reconstruction was applied on measured phantom, as well as clinical data and was compared against the HRRT users' community resolution modelling reconstruction, which is currently in use. Results demonstrated improvements both in contrast and resolution recovery, particularly for regions close to the edges of the FOV, with almost uniform resolution recovery across the entire transverse FOV. In addition, because the newly measured resolution kernel is slightly broader with wider tails, compared to the deliberately conservative kernel employed in the HRRT users' community software, the reconstructed images appear to have not only improved contrast recovery (up to 20% for small regions), but also better noise characteristics.

  3. Alpha image reconstruction (AIR): A new iterative CT image reconstruction approach using voxel-wise alpha blending

    SciTech Connect

    Hofmann, Christian; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc

    2014-06-15

    Purpose: Iterative image reconstruction is gaining increasing interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how to best achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function, which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off because the stronger the regularization, and thus the noise reduction, the stronger the loss of spatial resolution and thus loss of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts with generating basis images, which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast
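
    The per-voxel blending step can be illustrated with a toy example; how the alpha map is actually reconstructed from the raw data is the core of AIR and is not reproduced here. The edge-driven weight map below is a hypothetical stand-in.

```python
# Conceptual sketch only (not the AIR algorithm itself): blend a sharp/noisy
# basis image and a smooth/low-noise basis image with a per-voxel weight, so
# that edges keep the high-resolution basis and flat regions keep the low-noise
# basis.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

rng = np.random.default_rng(2)
phantom = np.zeros((128, 128)); phantom[40:90, 40:90] = 1.0
noisy_sharp = phantom + 0.15 * rng.standard_normal(phantom.shape)   # "high resolution" basis
smooth = gaussian_filter(noisy_sharp, sigma=2.0)                    # "low noise" basis

edges = np.hypot(sobel(smooth, axis=0), sobel(smooth, axis=1))
alpha = edges / edges.max()                       # hypothetical edge-driven weight map
blended = alpha * noisy_sharp + (1.0 - alpha) * smooth

print("noise std in flat region:", blended[5:30, 5:30].std())
```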

  4. Local fingerprint image reconstruction based on gabor filtering

    NASA Astrophysics Data System (ADS)

    Bakhtiari, Somayeh; Agaian, Sos S.; Jamshidi, Mo

    2012-06-01

    In this paper, we propose two solutions for fingerprint local image reconstruction based on Gabor filtering. Gabor filtering is a popular method for fingerprint image enhancement. However, the reliability of the information in the output image suffers when the input image has poor quality. This is the result of spurious estimates of frequency and orientation by classical approaches, particularly in scratch regions. In both techniques of this paper, the scratch marks are recognized initially using a reliability image, which is calculated from the gradient images. The first algorithm is based on an inpainting technique, and the second method employs two different kernels for the scratch and non-scratch parts of the image to calculate the gradient images. The simulation results show that both approaches allow the actual information of the image to be preserved while connecting discontinuities correctly, by approximating the orientation matrix more faithfully.

  5. A rapid reconstruction algorithm for three-dimensional scanning images

    NASA Astrophysics Data System (ADS)

    Xiang, Jiying; Wu, Zhen; Zhang, Ping; Huang, Dexiu

    1998-04-01

    A 'simulated fluorescence' three-dimensional reconstruction algorithm, which is especially suitable for confocal images of partially transparent biological samples, is proposed in this paper. To reproduce the retinal projection of the object and to avoid excessive memory consumption, the original image is rotated and compressed before processing. A left image and a right image are rendered in different colors to increase the sense of stereo. The details originally hidden in deep layers are well exhibited with the aid of an 'auxiliary directional source'. In addition, the time consumption is greatly reduced compared with conventional methods such as 'ray tracing'. The realization of the algorithm is illustrated by a group of reconstructed images.

  6. Reconstruction of 7T-Like Images From 3T MRI.

    PubMed

    Bahrami, Khosro; Shi, Feng; Zong, Xiaopeng; Shin, Hae Won; An, Hongyu; Shen, Dinggang

    2016-09-01

    Ultra-high-field (7T) MR imaging provides higher resolution and better tissue contrast than routine 3T MRI, which may help in earlier and more accurate diagnosis of brain diseases. However, 7T MRI scanners are currently more expensive and less available at clinical and research centers. This motivates us to propose a method for reconstructing images close to the quality of 7T MRI, called 7T-like images, from 3T MRI, to improve the quality in terms of resolution and contrast. By doing so, post-processing tasks such as tissue segmentation can be done more accurately, and brain tissue details can be seen with higher resolution and contrast. To do this, we have acquired a unique dataset which includes paired 3T and 7T images scanned from the same subjects, and propose a hierarchical reconstruction based on group sparsity in a novel multi-level Canonical Correlation Analysis (CCA) space, to improve the quality of a 3T MR image toward 7T-like MRI. First, overlapping patches are extracted from the input 3T MR image. Then, by extracting the most similar patches from all the aligned 3T and 7T images in the training set, the paired 3T and 7T dictionaries are constructed for each patch; for training, we use pairs of 3T and 7T MR images from each training subject. We then propose multi-level CCA to map the paired 3T and 7T patch sets to a common space to increase their correlations. In this space, each input 3T MRI patch is sparsely represented by the 3T dictionary, and the obtained sparse coefficients are used together with the corresponding 7T dictionary to reconstruct the 7T-like patch. To maintain structural consistency between adjacent patches, group sparsity is employed. This reconstruction is performed with changing patch sizes in a hierarchical framework. Experiments have been done using 13 subjects with both 3T and 7T MR images. The results show that our method outperforms previous

  8. Improved image quality and computation reduction in 4-D reconstruction of cardiac-gated SPECT images.

    PubMed

    Narayanan, M V; King, M A; Wernick, M N; Byrne, C L; Soares, E J; Pretorius, P H

    2000-05-01

    Spatiotemporal reconstruction of cardiac-gated SPECT images permits us to obtain valuable information related to cardiac function. However, the task of reconstructing this four-dimensional (4-D) data set is computation intensive. Typically, these studies are reconstructed frame-by-frame: a nonoptimal approach because temporal correlations in the signal are not accounted for. In this work, we show that the compression and signal decorrelation properties of the Karhunen-Loève (KL) transform may be used to greatly simplify the spatiotemporal reconstruction problem. The gated projections are first KL transformed in the temporal direction. This results in a sequence of KL-transformed projection images for which the signal components are uncorrelated along the time axis. As a result, the 4-D reconstruction task is simplified to a series of three-dimensional (3-D) reconstructions in the KL domain. The reconstructed KL components are subsequently inverse KL transformed to obtain the entire spatiotemporal reconstruction set. Our simulation and clinical results indicate that KL processing provides image sequences that are less noisy than are conventional frame-by-frame reconstructions. Additionally, by discarding high-order KL components that are dominated by noise, we can achieve savings in computation time because fewer reconstructions are needed in comparison to conventional frame-by-frame reconstructions.
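
    A minimal sketch of the temporal KL (principal component) decorrelation idea with an identity "scanner", so the per-component reconstruction step is trivial; in practice each retained KL component image would be reconstructed with a 3-D algorithm before inverse transforming. All sizes and signals below are invented for illustration.

```python
# KL transform along the temporal direction of gated data, truncation of
# noise-dominated components, and inverse KL transform.
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_pix = 8, 256
t = np.arange(n_frames)
signal = np.outer(np.sin(2 * np.pi * t / n_frames), rng.random(n_pix))   # temporally correlated signal
gated = signal + 0.2 * rng.standard_normal((n_frames, n_pix))            # noisy gated "projections"

cov = np.cov(gated)                           # n_frames x n_frames temporal covariance
eigval, eigvec = np.linalg.eigh(cov)
basis = eigvec[:, np.argsort(eigval)[::-1]]   # KL basis, largest components first

kl = basis.T @ gated                          # KL-domain data
kl[2:, :] = 0.0                               # discard noise-dominated components
denoised = basis @ kl                         # inverse KL transform

print("rmse before:", np.sqrt(((gated - signal) ** 2).mean()))
print("rmse after: ", np.sqrt(((denoised - signal) ** 2).mean()))
```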

  9. A methodology to event reconstruction from trace images.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre

    2015-03-01

    The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experiences and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence for which the results of each step rely on the previous step. However, the methodology is not linear, but rather a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally

  10. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    PubMed Central

    Xu, Shiyu; Lu, Jianping; Zhou, Otto; Chen, Ying

    2015-01-01

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications. PMID:26328987

  11. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as φ_γ, are discrete-time signals, where γ represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
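
    A small matching-pursuit sketch over an assumed random unit-norm dictionary standing in for the wavelet-packet waveforms; it shows the greedy atom selection common to the methods compared above.

```python
# Greedy matching pursuit: repeatedly pick the dictionary atom most correlated
# with the residual and subtract its contribution.
import numpy as np

rng = np.random.default_rng(4)
n, n_atoms = 64, 256
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms

signal = D[:, 10] * 2.0 + D[:, 100] * -1.5     # signal built from two atoms
residual, coeffs = signal.copy(), np.zeros(n_atoms)
for _ in range(5):
    corr = D.T @ residual
    k = np.argmax(np.abs(corr))
    coeffs[k] += corr[k]
    residual -= corr[k] * D[:, k]

print("selected atoms:", np.nonzero(coeffs)[0], "residual norm:", np.linalg.norm(residual))
```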

  12. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as lambda, are discrete time signals, where y represents the dictionary index. A dictionary with a collection of these waveforms Is typically complete or over complete. Given such a dictionary, the goal is to obtain a representation Image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  13. Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.

    PubMed

    Fromm, S A; Sachse, C

    2016-01-01

    Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction, and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters, as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement through the introduction of direct electron detectors has significantly enhanced the image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method.

  15. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
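
    A rough illustration of the kind of transform-quantize-reconstruct evaluation described above, using a 2D DCT on a stand-in 8x8 block and an arbitrary uniform quantizer step (not the mission's actual coder).

```python
# Forward 2-D DCT, coarse uniform quantisation, inverse DCT, and the RMS
# reconstruction error expressed in grey levels.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(5)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # stand-in 8x8 image block

q = 16.0                                                   # uniform quantiser step (arbitrary)
coeffs = dctn(block, norm="ortho")
reconstructed = idctn(np.round(coeffs / q) * q, norm="ortho")

rms = np.sqrt(((block - reconstructed) ** 2).mean())
print(f"RMS error: {rms:.2f} grey levels")
```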

  16. Gadgetron: an open source framework for medical image reconstruction.

    PubMed

    Hansen, Michael Schacht; Sørensen, Thomas Sangild

    2013-06-01

    This work presents a new open source framework for medical image reconstruction called the "Gadgetron." The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or "Gadgets" from raw data to reconstructed images. The data processing pipeline is configured dynamically at run-time based on an extensible markup language configuration description. The framework promotes reuse and sharing of reconstruction modules and new Gadgets can be added to the Gadgetron framework through a plugin-like architecture without recompiling the basic framework infrastructure. Gadgets are typically implemented in C/C++, but the framework includes wrapper Gadgets that allow the user to implement new modules in the Python scripting language for rapid prototyping. In addition to the streaming framework infrastructure, the Gadgetron comes with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its application to Cartesian and non-Cartesian parallel magnetic resonance imaging.
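
    A toy illustration of the streaming-pipeline idea only; the class names and methods below are invented for this sketch and are not the Gadgetron API.

```python
# Data flow through a chain of "gadget" objects, each transforming the data and
# passing it downstream, from raw k-space to a reconstructed image.
import numpy as np

class Gadget:
    def __init__(self, nxt=None):
        self.next = nxt
    def process(self, data):
        out = self.transform(data)
        return self.next.process(out) if self.next else out
    def transform(self, data):
        return data

class RemoveOversampling(Gadget):
    def transform(self, kspace):                 # keep the central half of the readout
        n = kspace.shape[-1]
        return kspace[..., n // 4: 3 * n // 4]

class FFTRecon(Gadget):
    def transform(self, kspace):                 # inverse FFT to image space
        return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

pipeline = RemoveOversampling(FFTRecon())
image = pipeline.process(np.random.standard_normal((128, 256)) + 0j)
print(image.shape)                               # (128, 128)
```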

  17. Principles of MR image formation and reconstruction.

    PubMed

    Duerk, J L

    1999-11-01

    This article describes a number of concepts that provide insights into the process of MR imaging. The use of shaped, fixed-bandwidth RF pulses and magnetic field gradients is described to provide an understanding of the methods used for slice selection. Variations in the slice-excitation profile are shown as a function of the RF pulse shape used, the truncation method used, and the tip angle. It should be remembered that although the goal is to obtain uniform excitation across the slice, this goal is never achieved in practice, thus necessitating the use of slice gaps in some cases. Excitation, refocusing, and inversion pulses are described. Excitation pulses nutate the spins from the longitudinal axis into the transverse plane, where their magnetization can be detected. Refocusing pulses are used to flip the magnetization through 180 degrees once it is in the transverse plane, so that the influence of magnetic field inhomogeneities is eliminated. Inversion pulses are used to flip the magnetization from the +z to the -z direction in inversion-recovery sequences. Radiofrequency pulses can also be used to eliminate either fat or water protons from the images because of the small differences in resonant frequency between these two types of protons. Selective methods based on chemical shift and binomial methods are described. Once the desired magnetization has been tipped into the transverse plane by the slice-selection process, two imaging axes remain to be spatially encoded. One axis is easily encoded by the application of a second magnetic field gradient that establishes a one-to-one mapping between position and frequency during the time that the signal is converted from analog to digital sampling. This frequency-encoding gradient is used in combination with the Fourier transform to determine the location of the precessing magnetization. The second image axis is encoded by a process known as phase encoding. The collected data can be described as the 2D Fourier
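
    A minimal sketch of the 2D Fourier relationship outlined above: for fully sampled Cartesian frequency and phase encoding, the acquired k-space is the 2D Fourier transform of the object, so an inverse 2D FFT recovers the image. The disc phantom is an invented stand-in.

```python
# Simulate fully sampled Cartesian k-space of a simple object and reconstruct
# it with an inverse 2-D FFT.
import numpy as np

ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
obj = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)   # disc phantom

kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj)))      # "acquired" data
image = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

print("max reconstruction error:", np.abs(image - obj).max())
```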

  18. Reconstruction techniques for sparse multistatic linear array microwave imaging

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2014-06-01

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging-beam, swept-frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  19. Optimisation techniques for digital image reconstruction from their projections

    NASA Astrophysics Data System (ADS)

    Durrani, T. S.; Goutis, C. E.

    1980-09-01

    A method is proposed for the digital reconstruction of images from their projections based on optimizing specified performance criteria. The reconstruction problem is embedded into the framework of constrained optimization and its solution is shown to lead to a relationship between the image and the one-dimensional Lagrange functions associated with each cost criterion. Two types of geometries (the parallel-beam and fan-beam systems) are considered for the acquisition of projection data and the constrained-optimization problem is solved for both. The ensuing algorithms allow the reconstruction of multidimensional objects from one-dimensional functions only. For digital data a fast reconstruction algorithm is proposed which exploits the symmetries inherent in both a circular domain of image reconstruction and in projections obtained at equispaced angles. Computational complexity is significantly reduced by the use of fast-Fourier-transform techniques, as the underlying relationship between the available projection data and the associated Lagrange multipliers is shown to possess a block circulant matrix structure.

  20. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  1. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  2. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a stepwise increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms.
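
    As an example of one of the indices mentioned above, the global inhomogeneity (GI) index is commonly computed as the summed absolute deviation of pixel tidal impedance changes from their median, normalized by the total tidal change within the lung region. The sketch below uses invented data and a hypothetical lung mask.

```python
# Global inhomogeneity (GI) index from a tidal impedance-change image and a
# lung-region mask.
import numpy as np

def global_inhomogeneity(tidal_image, lung_mask):
    vals = tidal_image[lung_mask]
    return np.sum(np.abs(vals - np.median(vals))) / np.sum(vals)

rng = np.random.default_rng(6)
tidal = rng.random((32, 32))                 # stand-in tidal impedance-change image
mask = tidal > 0.2                           # hypothetical lung region
print("GI index:", global_inhomogeneity(tidal, mask))
```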

  3. Whole Mouse Brain Image Reconstruction from Serial Coronal Sections Using FIJI (ImageJ).

    PubMed

    Paletzki, Ronald; Gerfen, Charles R

    2015-10-01

    Whole-brain reconstruction of the mouse enables comprehensive analysis of the distribution of neurochemical markers, the distribution of anterogradely labeled axonal projections or retrogradely labeled neurons projecting to a specific brain site, or the distribution of neurons displaying activity-related markers in behavioral paradigms. This unit describes a method to produce whole-brain reconstruction image sets from coronal brain sections with up to four fluorescent markers using the freely available image-processing program FIJI (ImageJ).

  4. Double diffraction of quasiperiodic structures and Bayesian image reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Jian

    2006-04-01

    We study the spectrum of quasiperiodic structures by using quasiperiodic pulse trains. We find a single sharp diffraction peak when the dynamics of the incident wave matches the arrangement of the scatterers, that is, when the pulse train and the scatterers are in resonance. The maximum diffraction angle and the resonant pulse train determine the positions of the scatterers. These results may provide a methodology for identifying quasicrystals with a very large signal-to-noise ratio. We propose a double diffraction scheme to identify one-dimensional quasiperiodic structures with high precision. The scheme uses a set of scatterers to produce a sequence of quasiperiodic pulses from a single pulse, and then uses these pulses to determine the structure of the second set of scatterers. We find the maximum allowable number of target scatterers, given an experimental setup. Our calculation confirms our simulation results. The reverse problem of spectroscopy is reconstruction: that is, given an experimental image, how to reconstruct the original as faithfully as possible. We study the general image reconstruction problem under the Bayesian inference framework. We design a modified multiplicity prior distribution and use Gibbs sampling to reconstruct the latent image. In contrast with the traditional entropy prior, our modified multiplicity prior avoids the Stirling's formula approximation, incorporates an Occam's razor, and automatically adapts for the information content in the noisy input. We argue that the mean posterior image is a better representation than the maximum a posteriori (MAP) image. We also optimize the Gibbs sampling algorithm to determine the high-dimensional posterior density distribution with high efficiency. Our algorithm runs N^2 times faster than the traditional Gibbs sampler. With the knowledge of the full posterior distribution, statistical measures such as standard error and confidence interval can be easily generated. Our algorithm is not only useful for

  5. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    NASA Astrophysics Data System (ADS)

    Ortuño, J. E.; Kontaxakis, G.; Rubio, J. L.; Guerra, P.; Santos, A.

    2010-04-01

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
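
    A minimal OSEM sketch using a precomputed sparse system matrix, in the spirit of the approach above; the matrix here is random and tiny, and the Monte Carlo modelling and symmetry handling are omitted.

```python
# Ordered-subsets EM (OSEM) with a precomputed sparse system matrix A: one
# multiplicative update per projection subset.
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(7)
n_vox, n_lor, n_subsets = 64, 240, 4
A = sparse_random(n_lor, n_vox, density=0.2, random_state=7, format="csr")

x_true = rng.random(n_vox) + 0.1
y = rng.poisson(100 * A @ x_true).astype(float)        # noisy coincidence data

x = np.ones(n_vox)
subsets = np.array_split(np.arange(n_lor), n_subsets)
for _ in range(10):                                    # OSEM iterations
    for rows in subsets:                               # one update per subset
        As = A[rows]
        ratio = y[rows] / np.maximum(As @ x, 1e-12)
        x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(rows)), 1e-12)

print("relative error:", np.linalg.norm(x - 100 * x_true) / np.linalg.norm(100 * x_true))
```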

  6. Cascaded diffractive optical elements for improved multiplane image reconstruction.

    PubMed

    Gülses, A Alkan; Jenkins, B Keith

    2013-05-20

    Computer-generated phase-only diffractive optical elements in a cascaded setup are designed by one deterministic and one stochastic algorithm for multiplane image formation. It is hypothesized that increasing the number of elements as wavefront modulators in the longitudinal dimension would enlarge the available solution space, thus enabling enhanced image reconstruction. Numerical results show that increasing the number of holograms improves quality at the output. Design principles, computational methods, and specific conditions are discussed.
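
    A simplified single-element sketch (the paper designs cascaded elements with more elaborate deterministic and stochastic optimization): a Gerchberg-Saxton-style iteration for a phase-only element whose far field approximates a target intensity pattern.

```python
# Gerchberg-Saxton-style design of a phase-only hologram: alternate between the
# element plane (keep phase only) and the far field (impose the target amplitude).
import numpy as np

rng = np.random.default_rng(8)
n = 64
target = np.zeros((n, n)); target[20:44, 28:36] = 1.0          # target intensity
target_amp = np.sqrt(target)

phase = rng.uniform(0, 2 * np.pi, (n, n))                      # random initial phase
for _ in range(100):
    far = np.fft.fft2(np.exp(1j * phase))                      # propagate (Fourier optics)
    far = target_amp * np.exp(1j * np.angle(far))              # impose target amplitude
    near = np.fft.ifft2(far)
    phase = np.angle(near)                                     # keep phase only (phase-only DOE)

recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
print("correlation with target:", np.corrcoef(recon.ravel(), target.ravel())[0, 1])
```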

  8. Advances in imaging technologies for planning breast reconstruction.

    PubMed

    Mohan, Anita T; Saint-Cyr, Michel

    2016-04-01

    The role and choice of preoperative imaging for planning in breast reconstruction is still a disputed topic in the reconstructive community, with varying opinions on the necessity, the ideal imaging modality, costs and impact on patient outcomes. Since the advent of perforator flaps, their use in microsurgical breast reconstruction has grown. Perforator-based flaps afford lower donor morbidity by sparing the underlying muscle, provide durable results and superior cosmesis to create a natural-looking new breast, and are preferred in the context of radiation therapy. However, these surgeries are complex, more technically challenging than implant-based reconstruction, and leave little room for error. The role of imaging in breast reconstruction can assist the surgeon in exploring or confirming flap choices based on donor site characteristics and the presence of suitable perforators. Vascular anatomical studies in the lab have provided the surgeon with a foundation of knowledge on the location and vascular territories of individual perforators to improve our understanding of flap design and safe flap harvest. The creation of a presurgical map in patients can highlight any abnormal or individual anatomical variance to optimize flap design, intraoperative decision-making and execution of flap harvest with greater predictability and efficiency. This article highlights the role and techniques for preoperative planning using the newer technologies that have been adopted in reconstructive clinical practice: computed tomographic angiography (CTA), magnetic resonance angiography (MRA), laser-assisted indocyanine green fluorescence angiography (LA-ICGFA) and dynamic infrared thermography (DIRT). The primary focus of this paper is on the application of CTA and MRA imaging modalities. PMID:27047790

  9. RECONSTRUCTION OF HUMAN LUNG MORPHOLOGY MODELS FROM MAGNETIC RESONANCE IMAGES

    EPA Science Inventory


    Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images
    T. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)

  10. Calibrationless Parallel Imaging Reconstruction Based on Structured Low-Rank Matrix Completion

    PubMed Central

    Shin, Peter J.; Larson, Peder E.Z.; Ohliger, Michael A.; Elad, Michael; Pauly, John M.; Vigneron, Daniel B.; Lustig, Michael

    2013-01-01

    Purpose A calibrationless parallel imaging reconstruction method, termed simultaneous auto-calibrating and k-space estimation (SAKE), is presented. It is a data-driven, coil-by-coil reconstruction method that does not require a separate calibration step for estimating coil sensitivity information. Methods In SAKE, an under-sampled multi-channel dataset is structured into a single data matrix. Then the reconstruction is formulated as a structured low-rank matrix completion problem. An iterative solution that implements a projection-onto-sets algorithm with singular value thresholding is described. Results Reconstruction results are demonstrated for retrospectively and prospectively under-sampled, multi-channel Cartesian data having no calibration signals. Additionally, non-Cartesian data reconstruction is presented. Finally, improved image quality is demonstrated by combining SAKE with wavelet-based compressed sensing. Conclusion As estimation of coil sensitivity information is not needed, the proposed method could potentially benefit MR applications where acquiring accurate calibration data is limiting or not possible at all. PMID:24248734

  11. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergency.
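
    As a concrete illustration of the pipeline described above, the fragment below sketches only the topology-pruned pairwise matching stage, assuming OpenCV's SIFT detector and a brute-force matcher with Lowe's ratio test; the camera positions, the baseline threshold and all variable names are illustrative assumptions rather than details from the original work, and the surviving matches would then feed standard SfM and MVS steps.

        # Hypothetical sketch: prune image pairs using flight-log positions
        # (the "image topology" idea), then match SIFT features per pair.
        import itertools
        import cv2
        import numpy as np

        def candidate_pairs(positions, max_baseline=50.0):
            """Keep only image pairs whose camera centres are close enough."""
            pairs = []
            for i, j in itertools.combinations(range(len(positions)), 2):
                d = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[j]))
                if d < max_baseline:
                    pairs.append((i, j))
            return pairs

        def match_pair(img_a, img_b, ratio=0.75):
            """SIFT + brute-force matching with Lowe's ratio test."""
            sift = cv2.SIFT_create()
            kp_a, des_a = sift.detectAndCompute(img_a, None)
            kp_b, des_b = sift.detectAndCompute(img_b, None)
            knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
            good = [m for m, n in (p for p in knn if len(p) == 2)
                    if m.distance < ratio * n.distance]
            return kp_a, kp_b, good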

  12. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
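
    The search loop behind such a study can be sketched as a minimal ask/tell CMA-ES skeleton; this assumes the third-party `cma` Python package (the paper does not name its implementation), and the sphere fitness is only a placeholder for the real objective, i.e. the MSE of images compressed and reconstructed with a candidate coefficient vector.

        # Hypothetical CMA-ES skeleton; replace `fitness` with an evaluation
        # that compresses/reconstructs training images using the candidate
        # transform coefficients and returns the resulting MSE.
        import cma
        import numpy as np

        def fitness(coeffs):
            return float(np.sum(np.square(coeffs)))   # placeholder objective

        x0 = np.ones(16)                  # e.g. matched forward/inverse filter taps
        es = cma.CMAEvolutionStrategy(x0, 0.5)
        while not es.stop():
            candidates = es.ask()
            es.tell(candidates, [fitness(c) for c in candidates])
        best_coeffs = es.result.xbest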

  13. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance.
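
    The normalization step described above can be illustrated with a small sketch (not the authors' code): each modality is divided by its own baseline noise level so that the stacked data become unit-free, and the corresponding lead fields are scaled consistently so that the joint forward model remains valid; the array names and the baseline-based noise estimate are assumptions made for illustration.

        # Minimal sketch of making EEG and MEG commensurate before a joint inverse.
        # `eeg`, `meg` are (channels x time) arrays; `L_eeg`, `L_meg` are lead fields.
        import numpy as np

        def noise_normalize(data, baseline_slice):
            """Divide a modality by its average baseline noise std (unit-free)."""
            sigma = data[:, baseline_slice].std(axis=1).mean()
            return data / sigma, 1.0 / sigma

        def stack_modalities(eeg, meg, L_eeg, L_meg, baseline_slice):
            eeg_n, s_e = noise_normalize(eeg, baseline_slice)
            meg_n, s_m = noise_normalize(meg, baseline_slice)
            # Lead fields are scaled by the same factors so y_n = (s * L) x still holds.
            y = np.vstack([eeg_n, meg_n])
            L = np.vstack([L_eeg * s_e, L_meg * s_m])
            return y, L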

  14. Super-resolution image reconstruction for ultrasonic nondestructive evaluation.

    PubMed

    Li, Shanglei; Chu, Tsuchin Philip

    2013-12-01

    Ultrasonic testing is one of the most successful nondestructive evaluation (NDE) techniques for the inspection of carbon-fiber-reinforced polymer (CFRP) materials. This paper discusses the application of the iterative backprojection (IBP) super-resolution image reconstruction technique to carbon epoxy laminates with simulated defects to obtain high-resolution images for NDE. Super-resolution image reconstruction is an approach used to overcome the inherent resolution limitations of an existing ultrasonic system. It can greatly improve the image quality and allow more detailed inspection of the region of interest (ROI) with high resolution, improving defect evaluation and accuracy. First, three artificially simulated delamination defects in a CFRP panel were considered to evaluate and validate the application of the IBP method. The results of the validation indicate that both the contrast-to-noise ratio (CNR) and the peak signal-to-noise ratio (PSNR) value of the super-resolution result are better than those of the bicubic interpolation method. Then, the IBP method was applied to the low-resolution ultrasonic C-scan image sequence with subpixel displacement of two types of defects (delamination and porosity) which were obtained by the micro-scanning imaging technique. The results demonstrated that super-resolution images achieved better visual quality with an improved image resolution compared with raw C-scan images.
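
    Since the evaluation relies on CNR and PSNR, a small sketch of these two figures of merit may be useful; conventions differ slightly between papers, so the formulas below follow one common definition and are not necessarily the exact ones used by the authors.

        import numpy as np

        def psnr(reference, test, peak=None):
            """Peak signal-to-noise ratio in dB (one common convention)."""
            peak = reference.max() if peak is None else peak
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        def cnr(image, roi_mask, background_mask):
            """Contrast-to-noise ratio between a defect ROI and the background."""
            roi = image[roi_mask]
            bkg = image[background_mask]
            return abs(roi.mean() - bkg.mean()) / bkg.std()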

  15. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as the Wiener, Lucy-Richardson and blind techniques, this approach is new. In this method, coded-aperture processing is, for the first time, used independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacles in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system were overcome. Based on a theoretical study, simulations of penumbral imaging and image reconstruction were then carried out and provided fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  16. Pattern density function for reconstruction of three-dimensional porous media from a single two-dimensional image

    NASA Astrophysics Data System (ADS)

    Gao, Mingliang; Teng, Qizhi; He, Xiaohai; Zuo, Chen; Li, ZhengJi

    2016-01-01

    Three-dimensional (3D) structures are useful for studying the spatial structures and physical properties of porous media. A 3D structure can be reconstructed from a single two-dimensional (2D) training image (TI) by using mathematical modeling methods. Among many reconstruction algorithms, an optimal-based algorithm was developed and has strong stability. However, this type of algorithm generally uses an autocorrelation function (which is unable to accurately describe the morphological features of porous media) as its objective function. This has negatively affected further research on porous media. To accurately reconstruct 3D porous media, a pattern density function is proposed in this paper, which is based on a random variable employed to characterize image patterns. In addition, the paper proposes an original optimal-based algorithm called the pattern density function simulation; this algorithm uses a pattern density function as its objective function, and adopts a multiple-grid system. Meanwhile, to address the key point of algorithm reconstruction speed, we propose the use of neighborhood statistics, the adjacent grid and reversed phase method, and a simplified temperature-controlled mechanism. The pattern density function is a high-order statistical function; thus, when all grids in the reconstruction results converge in the objective functions, the morphological features and statistical properties of the reconstruction results will be consistent with those of the TI. The experiments include 2D reconstruction using one artificial structure, and 3D reconstruction using battery materials and cores. Hierarchical simulated annealing and single normal equation simulation are employed as the comparison algorithms. The autocorrelation function, linear path function, and pore network model are used as the quantitative measures. Comprehensive tests show that 3D porous media can be reconstructed accurately from a single 2D training image by using the method proposed

  17. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with a reference image and projecting them onto the high resolution grid, based on the shift of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to the registration inaccuracies.
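
    The projection step can be sketched as follows, assuming the subpixel shifts are already known from the registration step; the use of scipy.interpolate.griddata and the 2x magnification factor are illustrative choices standing in for the nearest-neighbor, inverse-distance and RBF interpolants mentioned above.

        # Minimal sketch of the projection step: samples from several shifted LR
        # images are scattered onto a 2x HR grid and interpolated.
        import numpy as np
        from scipy.interpolate import griddata

        def project_to_hr(lr_images, shifts, factor=2, method="linear"):
            """lr_images: list of (h, w) arrays; shifts: list of (dy, dx) in LR pixels."""
            h, w = lr_images[0].shape
            pts, vals = [], []
            for img, (dy, dx) in zip(lr_images, shifts):
                yy, xx = np.mgrid[0:h, 0:w]
                pts.append(np.column_stack([(yy + dy).ravel(), (xx + dx).ravel()]) * factor)
                vals.append(img.ravel())
            pts = np.vstack(pts)
            vals = np.concatenate(vals)
            gy, gx = np.mgrid[0:h * factor, 0:w * factor]
            # Border pixels outside the convex hull of the samples come back as NaN
            # and would need a nearest-neighbour fill in practice.
            return griddata(pts, vals, (gy, gx), method=method)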

  18. High resolution image reconstruction from projection of low resolution images differing in subpixel shifts

    NASA Astrophysics Data System (ADS)

    Mareboyana, Manohar; Le Moigne, Jacqueline; Bennett, Jerome

    2016-05-01

    In this paper, we demonstrate simple algorithms that project low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithms are very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), are used in the projection. For best accuracy, reconstructing an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with a reference image and projecting them onto the high resolution grid, based on the shift of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithms were tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP) and Maximum Likelihood (ML) algorithms. The algorithms are robust and are not overly sensitive to the registration inaccuracies.

  19. Statistical reconstruction algorithms for continuous wave electron spin resonance imaging

    NASA Astrophysics Data System (ADS)

    Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon

    2013-06-01

    Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.

  20. Investigation of limited-view image reconstruction in optoacoustic tomography employing a priori structural information

    NASA Astrophysics Data System (ADS)

    Huang, Chao; Oraevsky, Alexander A.; Anastasio, Mark A.

    2010-08-01

    Optoacoustic tomography (OAT) is an emerging ultrasound-mediated biophotonic imaging modality that has exciting potential for many biomedical imaging applications. There is great interest in conducting B-mode ultrasound and OAT imaging studies for breast cancer detection using a common transducer. In this situation, the range of tomographic view angles is limited, which can result in distortions in the reconstructed OAT image if conventional reconstruction algorithms are applied to limited-view measurement data. In this work, we investigate an image reconstruction method that utilizes information regarding target boundaries to improve the quality of the reconstructed OAT images. This is accomplished by developing a boundary-constrained image reconstruction algorithm for OAT based on Bayesian image reconstruction theory. The computer-simulation studies demonstrate that the Bayesian approach can effectively reduce the artifact and noise levels and preserve the edges in reconstructed limited-view OAT images as compared to those produced by a conventional OAT reconstruction algorithm.

  1. Toward more accurate ancestral protein genotype-phenotype reconstructions with the use of species tree-aware gene trees.

    PubMed

    Groussin, Mathieu; Hobbs, Joanne K; Szöllősi, Gergely J; Gribaldo, Simonetta; Arcus, Vickery L; Gouy, Manolo

    2015-01-01

    The resurrection of ancestral proteins provides direct insight into how natural selection has shaped proteins found in nature. By tracing substitutions along a gene phylogeny, ancestral proteins can be reconstructed in silico and subsequently synthesized in vitro. This elegant strategy reveals the complex mechanisms responsible for the evolution of protein functions and structures. However, to date, all protein resurrection studies have used simplistic approaches for ancestral sequence reconstruction (ASR), including the assumption that a single sequence alignment alone is sufficient to accurately reconstruct the history of the gene family. The impact of such shortcuts on conclusions about ancestral functions has not been investigated. Here, we show with simulations that utilizing information on species history using a model that accounts for the duplication, horizontal transfer, and loss (DTL) of genes statistically increases ASR accuracy. This underscores the importance of the tree topology in the inference of putative ancestors. We validate our in silico predictions using in vitro resurrection of the LeuB enzyme for the ancestor of the Firmicutes, a major and ancient bacterial phylum. With this particular protein, our experimental results demonstrate that information on the species phylogeny results in a biochemically more realistic and kinetically more stable ancestral protein. Additional resurrection experiments with different proteins are necessary to statistically quantify the impact of using species tree-aware gene trees on ancestral protein phenotypes. Nonetheless, our results suggest the need for incorporating both sequence and DTL information in future studies of protein resurrections to accurately define the genotype-phenotype space in which proteins diversify.

  2. Toward more accurate ancestral protein genotype-phenotype reconstructions with the use of species tree-aware gene trees.

    PubMed

    Groussin, Mathieu; Hobbs, Joanne K; Szöllősi, Gergely J; Gribaldo, Simonetta; Arcus, Vickery L; Gouy, Manolo

    2015-01-01

    The resurrection of ancestral proteins provides direct insight into how natural selection has shaped proteins found in nature. By tracing substitutions along a gene phylogeny, ancestral proteins can be reconstructed in silico and subsequently synthesized in vitro. This elegant strategy reveals the complex mechanisms responsible for the evolution of protein functions and structures. However, to date, all protein resurrection studies have used simplistic approaches for ancestral sequence reconstruction (ASR), including the assumption that a single sequence alignment alone is sufficient to accurately reconstruct the history of the gene family. The impact of such shortcuts on conclusions about ancestral functions has not been investigated. Here, we show with simulations that utilizing information on species history using a model that accounts for the duplication, horizontal transfer, and loss (DTL) of genes statistically increases ASR accuracy. This underscores the importance of the tree topology in the inference of putative ancestors. We validate our in silico predictions using in vitro resurrection of the LeuB enzyme for the ancestor of the Firmicutes, a major and ancient bacterial phylum. With this particular protein, our experimental results demonstrate that information on the species phylogeny results in a biochemically more realistic and kinetically more stable ancestral protein. Additional resurrection experiments with different proteins are necessary to statistically quantify the impact of using species tree-aware gene trees on ancestral protein phenotypes. Nonetheless, our results suggest the need for incorporating both sequence and DTL information in future studies of protein resurrections to accurately define the genotype-phenotype space in which proteins diversify. PMID:25371435

  3. Highly undersampled magnetic resonance image reconstruction via homotopic l(0) -minimization.

    PubMed

    Trzasko, Joshua; Manduca, Armando

    2009-01-01

    In clinical magnetic resonance imaging (MRI), any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in compressive sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled K-space data by solving a convex l(1) -minimization problem. Although l(1)-based techniques are extremely powerful, they inherently require a degree of over-sampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the CS paradigm based on homotopic approximation of the l(0) quasi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard CS methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled K-space data are presented.
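
    The following toy sketch is not the authors' homotopic l0 algorithm, but it illustrates the underlying intuition of gradually relaxing a sparsity constraint: an image assumed sparse in the pixel domain is recovered from undersampled Fourier samples by alternating data consistency with soft-thresholding while the threshold is lowered over the iterations; all parameter values are illustrative.

        # Toy sketch: iterative soft-thresholding with a decreasing threshold
        # for reconstruction from partially sampled k-space (a crude continuation
        # in the spirit of homotopic relaxation, not the paper's method).
        import numpy as np

        def ist_partial_fourier(kspace, mask, n_iter=200, t0=0.5, t_min=0.01):
            x = np.zeros_like(kspace, dtype=complex)
            for t in np.geomspace(t0, t_min, n_iter):
                # Enforce data consistency on the sampled k-space locations.
                k = np.fft.fft2(x)
                k[mask] = kspace[mask]
                x = np.fft.ifft2(k)
                # Soft-threshold the (assumed sparse) image magnitudes.
                mag = np.abs(x)
                x = np.where(mag > 0,
                             x * np.maximum(mag - t, 0) / np.maximum(mag, 1e-12),
                             0)
            return np.abs(x)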

  4. Experimental study on the 3D image reconstruction in a truncated Archimedean-like spiral geometry with a long-rectangular detector and its image characteristics

    NASA Astrophysics Data System (ADS)

    Hong, Daeki; Cho, Heemoon; Cho, Hyosung; Choi, Sungil; Je, Uikyu; Park, Yeonok; Park, Chulkyu; Lim, Hyunwoo; Park, Soyoung; Woo, Taeho

    2015-11-01

    In this work, we performed a feasibility study on three-dimensional (3D) image reconstruction in a truncated Archimedean-like spiral geometry with a long-rectangular detector for application to highly accurate, cost-effective dental x-ray imaging. Here, an x-ray tube and a detector rotate together around the rotational axis several times and, concurrently, the detector moves horizontally in the detector coordinate system at a constant speed to cover the whole imaging volume during the projection data acquisition. We established a table-top setup that mainly consists of an x-ray tube (60 kVp, 5 mA), a narrow CMOS-type detector (198-μm pixel resolution, 184 (W)×1176 (H) pixel dimension), and a rotational stage for sample mounting, and performed a systematic experiment to demonstrate the viability of the proposed approach to volumetric dental imaging. For the image reconstruction, we employed a compressed-sensing (CS)-based algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate reconstruction. We successfully reconstructed 3D images of considerably high quality and investigated the image characteristics in terms of the image value profile, the contrast-to-noise ratio (CNR), and the spatial resolution.

  5. Colored three-dimensional reconstruction of vehicular thermal infrared images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Leung, Henry; Shen, Zhenyi

    2015-06-01

    Enhancement of vehicular night vision thermal infrared images is an important problem in intelligent vehicles. We propose to create a colorful three-dimensional (3-D) display of infrared images for the vehicular night vision assistant driving system. We combine the plane parameter Markov random field (PP-MRF) model-based depth estimation with classification-based infrared image colorization to perform colored 3-D reconstruction of vehicular thermal infrared images. We first train the PP-MRF model to learn the relationship between superpixel features and plane parameters. The infrared images are then colorized and we perform superpixel segmentation and feature extraction on the colorized images. The PP-MRF model is used to estimate the superpixel plane parameter and to analyze the structure of the superpixels according to the characteristics of vehicular thermal infrared images. Finally, we estimate the depth of each pixel to perform 3-D reconstruction. Experimental results demonstrate that the proposed method can give a visually pleasing and daytime-like colorful 3-D display from a monochromatic vehicular thermal infrared image, which can help drivers to have a better understanding of the environment.

  6. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing.

    PubMed

    Li, Li; Xiao, Wei; Jian, Weijian

    2014-11-20

    Three-dimensional (3D) laser imaging combined with compressive sensing (CS) has the advantages of lower power consumption and fewer imaging sensors; however, it places a heavy computational burden on subsequent processing devices. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D imaging reconstruction before CS recovery, thus saving much of the runtime of CS recovery. Several experiments were conducted to verify the performance of the algorithm. Simulation results demonstrated that the proposed algorithm has better performance in terms of efficiency compared to an existing algorithm.

  7. Atmospheric isoplanatism and astronomical image reconstruction on Mauna Kea

    SciTech Connect

    Cowie, L.L.; Songaila, A.

    1988-07-01

    Atmospheric isoplanatism for visual wavelength image-reconstruction applications was measured on Mauna Kea in Hawaii. For most nights the correlation of the transform functions is substantially wider than the long-exposure transform function at separations up to 30 arcsec. Theoretical analysis shows that this is reasonable if the mean Fried parameter is approximately 30 cm at 5500 Å. Reconstructed image quality may be described by a Gaussian with a FWHM of λ/s₀. Under average conditions, s₀(30 arcsec) exceeds 55 cm at 7000 Å. The results show that visual image quality in the 0.1-0.2 arcsec range is obtainable over much of the sky with large ground-based telescopes on this site.

  8. Image Reconstruction from Under sampled Fourier Data Using the Polynomial Annihilation Transform

    DOE PAGES

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    2015-09-09

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
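
    At the heart of each Split Bregman iteration, whichever l1-inducing transform is used (TV or the PA transform), is a shrinkage step followed by a Bregman variable update; the fragment below sketches just that step, while the data-fidelity subproblem and the transform itself are application-specific and omitted.

        import numpy as np

        def shrink(v, lam):
            """Soft-thresholding: argmin_d lam*|d| + 0.5*(d - v)^2, elementwise."""
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        def bregman_update(d, b, Tu, lam):
            """One Split Bregman step for the l1 subproblem, given the transformed
            image Tu: d = shrink(Tu + b, lam), then b = b + Tu - d."""
            d = shrink(Tu + b, lam)
            b = b + Tu - d
            return d, b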

  9. Image Reconstruction from Under sampled Fourier Data Using the Polynomial Annihilation Transform

    SciTech Connect

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    2015-09-09

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.

  10. 3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2015-12-01

    The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as acquisition in non-weight-bearing positions, cost, and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternatives for achieving accurate 3D models with low radiation dose in weight-bearing positions. Different photogrammetry-based methods have been offered for 3D reconstruction from X-ray images, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods for 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are discussed. Finally, the presented methods are compared with respect to several metrics such as accuracy, reconstruction time and application. Each method has advantages and disadvantages that should be considered for a specific application.

  11. MetaBAT, an efficient tool for accurately reconstructing single genomes from complex microbial communities

    DOE PAGES

    Kang, Dongwan D.; Froula, Jeff; Egan, Rob; Wang, Zhong

    2015-01-01

    Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.

  12. MetaBAT, an efficient tool for accurately reconstructing single genomes from complex microbial communities

    SciTech Connect

    Kang, Dongwan D.; Froula, Jeff; Egan, Rob; Wang, Zhong

    2015-01-01

    Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.

  13. Reconstruction of pulse noisy images via stochastic resonance.

    PubMed

    Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan

    2015-01-01

    We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911

  14. Reconstruction of pulse noisy images via stochastic resonance

    NASA Astrophysics Data System (ADS)

    Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan

    2015-06-01

    We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications.

  15. The SRT reconstruction algorithm for semiquantification in PET imaging

    SciTech Connect

    Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.

    2015-10-15

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of ¹⁸F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  16. LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation

    NASA Astrophysics Data System (ADS)

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2015-01-01

    Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which

  17. LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation

    PubMed Central

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2015-01-01

    Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanner. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present an LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3-D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the nonnegative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which

  18. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risks. Consequently, it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. Quantitative analysis accuracy is decreased by the presence of artefacts in the reconstructed images, mainly due to beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a postreconstruction technique that uses experimentally calculated off-line water and bone linearization curves, aiming to take into account the nonhomogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone mass. The presented correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a remarkable improvement in the calculated mouse femur mass was observed. Results were also compared to those obtained when using the simple water linearization technique, which does not take into account the nonhomogeneity of the object.
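
    The classical water-linearization idea that such corrections build on can be sketched briefly: projection values measured through known water thicknesses are fitted with a low-order polynomial that maps them back onto the ideal linear response, and the fit is applied to every projection before reconstruction. The calibration numbers below are made up for illustration, and the paper's actual method additionally combines a bone linearization curve.

        # Sketch of single-material (water) linearization; numbers are illustrative.
        import numpy as np

        thickness = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])        # cm of water
        measured = np.array([0.0, 0.11, 0.21, 0.39, 0.55, 0.69])    # -ln(I/I0), beam-hardened
        ideal = 0.2 * thickness                                      # linear target, mu_w = 0.2/cm

        coeffs = np.polyfit(measured, ideal, deg=3)                  # linearization curve

        def linearize(projections):
            """Map beam-hardened projection values onto the linearized scale."""
            return np.polyval(coeffs, projections)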

  19. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular in recent years. With light field optics, or the light field thesis, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking the picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.

  20. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    SciTech Connect

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong; Kim, Insoo; Han, Bumsoo

    2015-11-15

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filteration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, scatter correction method and noise reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the

  1. Accuracy of quantitative reconstructions in SPECT/CT imaging

    NASA Astrophysics Data System (ADS)

    Shcherbinin, S.; Celler, A.; Belhocine, T.; van der Werf, R.; Driedger, A.

    2008-09-01

    The goal of this study was to determine the quantitative accuracy of our OSEM-APDI reconstruction method based on SPECT/CT imaging for Tc-99m, In-111, I-123, and I-131 isotopes. Phantom studies were performed on a SPECT/low-dose multislice CT system (Infinia-Hawkeye-4 slice, GE Healthcare) using clinical acquisition protocols. Two radioactive sources were centrally and peripherally placed inside an anthropometric Thorax phantom filled with non-radioactive water. Corrections for attenuation, scatter, collimator blurring and collimator septal penetration were applied and their contribution to the overall accuracy of the reconstruction was evaluated. Reconstruction with the most comprehensive set of corrections resulted in activity estimation with error levels of 3-5% for all the isotopes.
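
    For readers unfamiliar with OSEM-type reconstruction, the generic MLEM update it builds on can be sketched as follows; OSEM simply applies the same multiplicative update over subsets of the projection data, and in a real implementation the corrections listed above enter through the system matrix and an additive scatter term. The tiny random system matrix here is only a stand-in for the real projector.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((64, 32))          # system matrix: 64 projection bins, 32 voxels
        x_true = rng.random(32)
        y = rng.poisson(A @ x_true * 50)  # noisy projection data
        s = np.zeros(64)                  # additive scatter estimate (zero here)

        x = np.ones(32)                   # initial image
        sens = A.T @ np.ones(64)          # sensitivity image
        for _ in range(50):
            ratio = y / np.maximum(A @ x + s, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)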

  2. Tomographic image reconstruction and rendering with texture-mapping hardware

    SciTech Connect

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially-designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 Mhz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
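
    A CPU sketch of the same warp-and-accumulate idea may make the mapping onto texture hardware clearer: each filtered projection is smeared along one axis and rotated into its view angle before summation, with scipy.ndimage.rotate standing in for the hardware image warp; the ramp filtering of the sinogram is assumed to have been done beforehand, and orientation conventions are illustrative.

        import numpy as np
        from scipy.ndimage import rotate

        def backproject(filtered_sinogram, angles_deg):
            """filtered_sinogram: (n_angles, n_detectors) array of filtered projections."""
            n_det = filtered_sinogram.shape[1]
            recon = np.zeros((n_det, n_det))
            for proj, angle in zip(filtered_sinogram, angles_deg):
                smear = np.tile(proj, (n_det, 1))              # constant along each ray
                recon += rotate(smear, angle, reshape=False, order=1)
            return recon * np.pi / (2 * len(angles_deg))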

  3. Tomographic image reconstruction and rendering with texture-mapping hardware

    NASA Astrophysics Data System (ADS)

    Azevedo, Stephen G.; Cabral, Brian K.; Foran, Jim

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine, shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in our case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. Our technique can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.

  4. Complications of anterior cruciate ligament reconstruction: MR imaging.

    PubMed

    Papakonstantinou, Olympia; Chung, Christine B; Chanchairujira, Kullanuch; Resnick, Donald L

    2003-05-01

    Arthroscopic reconstruction of the anterior cruciate ligament (ACL) using autografts or allografts is being performed with increasing frequency, particularly in young athletes. Although the procedure is generally well tolerated, with good success rates, early and late complications have been documented. As clinical manifestations of graft complications are often non-specific and plain radiographs cannot directly visualize the graft and the adjacent soft tissues, MR imaging has a definite role in the diagnosis of complications after ACL reconstruction and may direct subsequent therapeutic management. Our purpose is to review the normal MR imaging of the ACL graft and present the MR imaging findings of a wide spectrum of complications after ACL reconstruction, such as graft impingement, graft rupture, cystic degeneration of the graft, postoperative infection of the knee, diffuse and localized (i.e., cyclops lesion) arthrofibrosis, and associated donor site abnormalities. Awareness of the MR imaging findings of complications as well as the normal appearances of the normal ACL graft is essential for correct interpretation.

  5. Complications of anterior cruciate ligament reconstruction: MR imaging.

    PubMed

    Papakonstantinou, Olympia; Chung, Christine B; Chanchairujira, Kullanuch; Resnick, Donald L

    2003-05-01

    Arthroscopic reconstruction of the anterior cruciate ligament (ACL) using autografts or allografts is being performed with increasing frequency, particularly in young athletes. Although the procedure is generally well tolerated, with good success rates, early and late complications have been documented. As clinical manifestations of graft complications are often non-specific and plain radiographs cannot directly visualize the graft and the adjacent soft tissues, MR imaging has a definite role in the diagnosis of complications after ACL reconstruction and may direct subsequent therapeutic management. Our purpose is to review the normal MR imaging of the ACL graft and present the MR imaging findings of a wide spectrum of complications after ACL reconstruction, such as graft impingement, graft rupture, cystic degeneration of the graft, postoperative infection of the knee, diffuse and localized (i.e., cyclops lesion) arthrofibrosis, and associated donor site abnormalities. Awareness of the MR imaging findings of complications as well as the normal appearances of the normal ACL graft is essential for correct interpretation. PMID:12695835

  6. Performance validation of phase diversity image reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Hirzberger, J.; Feller, A.; Riethmüller, T. L.; Gandorfer, A.; Solanki, S. K.

    2011-05-01

    We present a performance study of a phase diversity (PD) image reconstruction algorithm based on artificial solar images obtained from MHD simulations and on seeing-free data obtained with the SuFI instrument on the Sunrise balloon-borne observatory. The artificial data were altered by applying different levels of degradation with synthesised wavefront errors and noise. The PD algorithm was modified by changing the number of fitted polynomials, the shape of the pupil and the applied noise filter. The obtained reconstructions are evaluated by means of the resulting rms intensity contrast and by the conspicuousness of artifacts. The results show that PD is a robust method which consistently recovers the initial unaffected image contents. The efficiency of the reconstruction is, however, strongly dependent on the number of fitting polynomials used and on the noise level of the images. If the maximum number of fitted polynomials is higher than 21, artifacts have to be accepted, and for noise levels higher than 10^-3 the commonly used noise-filtering techniques are not able to avoid amplification of spurious structures.

  7. Missing data reconstruction using Gaussian mixture models for fingerprint images

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary

    2016-05-01

    One of the most important areas in biometrics is matching partial fingerprints in fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems for missing fingerprint information. However, a dependable reconstruction of fingerprint images still remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The offered fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, all-access control, and financial security, as well as for the verification of firearm purchasers, driver license applicants, etc.

  8. Edge-Preserving PET Image Reconstruction Using Trust Optimization Transfer

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization. The most commonly used quadratic penalty often over-smoothes sharp edges and fine features in reconstructed images, while non-quadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or non-smooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction by using two strategies: trust surrogate and optimization transfer descent. Trust surrogate approximates the original penalty by a smoother function at each iteration, but guarantees that the algorithm descends monotonically; optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results of computer simulations and real 3D data show that the proposed algorithm converges much faster than the conventional EM and PCG for smooth edge-preserving regularization and can also be more efficient than current state-of-the-art algorithms for the non-smooth ℓ1 regularization. PMID:25438302

  9. Parallel expectation-maximization algorithms for PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-Min

    1999-10-01

    Image reconstruction using Positron Emission Tomography (PET) involves estimating an unknown number of photon pairs emitted from the radiopharmaceuticals within the tissues of the patient's body. The generation of the photons can be described as a Poisson process, and the difficulty of image reconstruction lies in approximating the parameter of the tissue density distribution function. A significant amount of artifactual noise exists in images reconstructed with the convolution backprojection method. Using the Maximum Likelihood (ML) formulation, a better estimate can be made for the unknown image information. Despite the better quality of images, the Expectation Maximization (EM) iterative algorithm is not being used in practice due to the tremendous processing time. This research proposes new techniques for designing parallel algorithms in order to speed up the reconstruction process. Using the EM algorithm as an example, several general parallel techniques were studied for the distributed-memory architecture and the message-passing programming paradigm. Both intra- and inter-iteration latency-hiding schemes were designed to effectively reduce the communication time. Dependencies that exist within and between iterations were rearranged to overlap communication and computation using MPI's non-blocking collective reduction operation. A performance model was established to estimate the processing time of the algorithms and was found to agree with the experimental results. A second strategy, the sparse matrix compaction technique, was developed to reduce the computational time of the computation-bound EM algorithm by making better use of the PET system geometry. The proposed techniques are generally applicable to many scientific computing problems that involve sparse matrix operations as well as iterative types of algorithms.
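
    The latency-hiding idea mentioned above can be sketched with mpi4py's non-blocking collective reduction. The snippet below is an illustrative, simplified EM-style update (the variable names and the work being overlapped are assumptions, not the dissertation's code): the global sum of the locally back-projected correction image is started with Iallreduce so that independent work can proceed while the communication completes.

```python
from mpi4py import MPI   # requires an MPI-3 library for non-blocking collectives
import numpy as np

comm = MPI.COMM_WORLD

def em_iteration(local_rows, local_counts, image, total_sensitivity):
    """One distributed EM update with communication/computation overlap.

    local_rows        : this rank's block of the system matrix (n_local_lors, n_voxels)
    local_counts      : measured counts for this rank's LORs
    total_sensitivity : precomputed global column sums of the system matrix
    """
    expected = local_rows @ image                        # local forward projection
    ratio = np.where(expected > 0, local_counts / expected, 0.0)
    local_update = local_rows.T @ ratio                  # local backprojection

    # Start the global reduction without blocking ...
    global_update = np.empty_like(local_update)
    req = comm.Iallreduce(local_update, global_update, op=MPI.SUM)

    # ... and overlap it with independent work, e.g. staging the next
    # subset's sparse system-matrix rows (placeholder comment).

    req.Wait()                                           # reduction must finish here
    return image * global_update / np.maximum(total_sensitivity, 1e-12)
```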

  10. A dual oxygenation and fluorescence imaging platform for reconstructive surgery

    NASA Astrophysics Data System (ADS)

    Ashitate, Yoshitomo; Nguyen, John N.; Venugopal, Vivek; Stockdale, Alan; Neacsu, Florin; Kettenring, Frank; Lee, Bernard T.; Frangioni, John V.; Gioux, Sylvain

    2013-03-01

    There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively, leading to a large number of failures, patient morbidity, and increased healthcare costs. Because near-infrared (NIR) optical imaging is safe, noncontact, inexpensive, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. These capabilities are well illustrated through the clinical translation of fluorescence imaging during oncologic surgery. In this work, we introduce a novel imaging platform that combines two complementary NIR optical modalities: oxygenation imaging and fluorescence imaging. We validated this platform during facial reconstructive surgery on large animals approaching the size of humans. We demonstrate that NIR fluorescence imaging provides identification of perforator arteries, assesses arterial perfusion, and can detect thrombosis, while oxygenation imaging permits the passive monitoring of tissue vital status, as well as the detection and origin of vascular compromise simultaneously. Together, the two methods provide a comprehensive approach to identifying problems and intervening in real time during surgery before irreparable damage occurs. Taken together, this novel platform provides fully integrated and clinically friendly endogenous and exogenous NIR optical imaging for improved image-guided intervention during surgery.

  11. Characterization of a C-arm-mounted XRII for 3D image reconstruction during interventional neuroradiology

    NASA Astrophysics Data System (ADS)

    Fahrig, Rebecca; Fox, Allan J.; Holdsworth, David W.

    1996-04-01

    Treatment of subarachnoid aneurysms with endovascular techniques (e.g. placement of Guglielmi coils) is currently limited by the inability to visualize the neck of the aneurysm after the initial coils have been placed. Coils projecting into parent vessels may cause thrombosis, while incomplete filling leads to regrowth of the aneurysm. Since the procedure is performed using a gantry-mounted x-ray image intensifier (XRII), we have used this system to obtain 2-dimensional (2-D) images over approximately 200 degrees and reconstructed a 3-D image of the embolizing coil, residual aneurysm, and the parent vessel. The required data can be acquired, reconstructed and presented to the neuroradiologist during the interventional procedure. We have characterized an existing clinical C-arm radiographic system, and have developed correction procedures which provide a consistent and adequate data set for standard CT reconstruction using convolution-backprojection techniques. The angle of acquisition for each image is recorded using a hub-mounted angle encoder accurate to within plus or minus 0.3 degrees. We have characterized the XRII distortion over the rotation, and can correct images acquired at any known angle to within plus or minus 0.06 pixels. We have also measured the motion of the center of rotation (reproducible to within plus or minus 0.13 pixels) and correct for the displacement using an image shift and interpolation algorithm. Finally, we have investigated the effects on CT reconstruction of variable dilutions of contrast agent during the cardiac cycle, and have shown the contrast-to-noise ratio (CNR), in the absence of photon noise, to be 28. This is larger than the CNR which we achieve due to photon noise alone.

  12. Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.

    PubMed

    Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H

    2015-11-01

    Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.
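
    As background for the sparsity constraint described above, the snippet below shows a generic iterative soft-thresholding (ISTA) step for an ℓ1-regularized linear inverse problem. It is a stand-in illustration only; the paper's joint, model-based reconstruction over multiple observation times is considerably more involved.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, step, n_iter=200):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1.

    step should satisfy step <= 1 / ||A||^2 for convergence.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```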

  13. An efficient simultaneous reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Atkinson, Callum; Soria, Julio

    2009-10-01

    To date, Tomo-PIV has involved the use of the multiplicative algebraic reconstruction technique (MART), where the intensity of each 3D voxel is iteratively corrected to satisfy one recorded projection, or pixel intensity, at a time. This results in reconstruction times of multiple hours for each velocity field and requires considerable computer memory in order to store the associated weighting coefficients and intensity values for each point in the volume. In this paper, a rapid and less memory-intensive reconstruction algorithm is presented based on a multiplicative line-of-sight (MLOS) estimation that determines possible particle locations in the volume, followed by simultaneous iterative correction. Reconstructions of simulated images are presented for two simultaneous algorithms (SART and SMART) as well as the now standard MART algorithm, which indicate that the same accuracy as MART can be achieved 5.5 times faster, or 77 times faster with 15 times less memory if the processing and storage of the weighting matrix is considered. Application of MLOS-SMART and MART to a turbulent boundary layer at Re_θ = 2200 using a four-camera Tomo-PIV system with a volume of 1,000 × 1,000 × 160 voxels is discussed. Results indicate improvements in reconstruction speed of 15 times that of MART with a precalculated weighting matrix, or 65 times if calculation of the weighting matrix is considered. Furthermore, the memory needed to store a large weighting matrix and volume intensity is reduced by almost 40 times in this case.
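
    For reference, the MART correction that the abstract compares against can be written in a few lines. The dense-matrix sketch below is purely illustrative (real Tomo-PIV codes use precalculated sparse weights, which is exactly the memory cost the paper addresses): each recorded pixel multiplicatively corrects the voxels along its line of sight.

```python
import numpy as np

def mart(W, pixels, n_iter=5, mu=1.0):
    """Multiplicative ART.

    W      : (n_pixels, n_voxels) line-of-sight weighting matrix
    pixels : (n_pixels,) recorded pixel intensities
    mu     : relaxation exponent, 0 < mu <= 1
    """
    E = np.ones(W.shape[1])                       # uniform initial volume
    for _ in range(n_iter):
        for j, p in enumerate(pixels):            # one projection equation at a time
            w = W[j]
            proj = w @ E
            if proj <= 0:
                continue
            # Voxels with larger weights receive a stronger multiplicative correction.
            E *= (p / proj) ** (mu * w)
    return E
```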

  14. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    SciTech Connect

    Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J

    2014-06-01

    Purpose: Metal in patients creates streak artifacts in CT images. When the images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects reconstructed using the standard and MAR algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation therapy treatment planning.

  15. Research on image matching method of big data image of three-dimensional reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong

    2015-12-01

    Image matching is the main step in three-dimensional reconstruction. With the development of computer processing technology, finding the images to be matched within large image collections acquired in different formats, at different scales, and at different locations places new demands on image matching. To enable three-dimensional reconstruction from such big-data image sets, this paper puts forward a new, effective matching method based on a visual bag-of-words model. The main steps are building the bag-of-words model and matching the images. First, SIFT feature points are extracted from the images in the database and clustered to generate the bag-of-words model. Inverted files are then built on the bag of words, so that each visual word indexes all images in which it appears, and matching is performed only among images that share a word, which improves matching efficiency. Finally, the three-dimensional model is built from the matched images. Experimental results indicate that this method improves matching efficiency and is suitable for the requirements of large-data reconstruction.
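
    The inverted-file lookup described above can be sketched directly. In the toy example below (feature extraction and clustering into visual words are assumed to have been done already; all names are illustrative), each visual word maps to the set of database images containing it, so a query is scored only against images that share at least one word.

```python
from collections import defaultdict

def build_inverted_index(image_words):
    """image_words: {image_id: set of visual-word ids} -> {word: set of image_ids}."""
    index = defaultdict(set)
    for image_id, words in image_words.items():
        for w in words:
            index[w].add(image_id)
    return index

def candidate_matches(query_words, index):
    """Count shared visual words, touching only images that share a word."""
    votes = defaultdict(int)
    for w in query_words:
        for image_id in index.get(w, ()):
            votes[image_id] += 1
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)

# Usage sketch: index = build_inverted_index(database_words)
#               ranked = candidate_matches(words_of_query_image, index)
```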

  16. Reconstruction of hyperspectral image using matting model for classification

    NASA Astrophysics Data System (ADS)

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which makes them unsuitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. In the matting model, each spectral band of an HSI is decomposed into three components, i.e., alpha channel, spectral foreground, and spectral background. First, one spectral band of the HSI with more refined information than most other bands is selected and referred to as the alpha channel of the HSI, and it is used to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are utilized on the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.

  17. Limited Angle Reconstruction Method for Reconstructing Terrestrial Plasmaspheric Densities from EUV Images

    NASA Technical Reports Server (NTRS)

    Newman, Timothy; Santhanam, Naveen; Zhang, Huijuan; Gallagher, Dennis

    2003-01-01

    A new method for reconstructing the global 3D distribution of plasma densities in the plasmasphere from a limited number of 2D views is presented. The method is aimed at using data from the Extreme Ultraviolet (EUV) sensor on NASA's Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. Physical properties of the plasmasphere are exploited by the method to reduce the level of inaccuracy imposed by the limited number of views. The utility of the method is demonstrated on synthetic data.

  18. A comparison of manual neuronal reconstruction from biocytin histology or 2-photon imaging: morphometry and computer modeling

    PubMed Central

    Blackman, Arne V.; Grabuschnig, Stefan; Legenstein, Robert; Sjöström, P. Jesper

    2014-01-01

    Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate due to the occasional poor outcome of histological processing. However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use

  19. Improving Three-Dimensional (3D) Range Gated Reconstruction Through Time-of-Flight (TOF) Imaging Analysis

    NASA Astrophysics Data System (ADS)

    Chua, S. Y.; Wang, X.; Guo, N.; Tan, C. S.; Chai, T. Y.; Seet, G. L.

    2016-04-01

    This paper performs an experimental investigation of the TOF imaging profile, which strongly influences the quality of reconstruction needed to accomplish accurate range sensing. From our analysis, the recorded reflected intensity profile appears to deviate from the commonly assumed Gaussian model and can be perceived as a mixture of noise and the actual reflected signal. A noise-weighted average range calculation is therefore proposed to alleviate the influence of noise, based on the signal detection threshold and the system noises. In our experimental results, this alternative range solution demonstrates better accuracy than the conventional weighted-average method and is shown to act as a paraxial correction that improves range reconstruction in a 3D gated imaging system.

  20. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    PubMed

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided. PMID:21934823

  1. SU-E-J-23: An Accurate Algorithm to Match Imperfectly Matched Images for Lung Tumor Detection Without Markers

    SciTech Connect

    Rozario, T; Bereg, S; Chiu, T; Liu, H; Kearney, V; Jiang, L; Mao, W

    2014-06-01

    Purpose: In order to locate lung tumors on projection images without internal markers, a digitally reconstructed radiograph (DRR) is created and compared with the projection images. Since lung tumors always move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated based on a modified anatomy from which the lung tumors are removed. In addition, global discrepancies exist between DRRs and projections due to their different image origins, scattering, and noise. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: This method divides the global images into a matrix of small tiles, and similarity is evaluated by calculating the normalized cross correlation (NCC) between corresponding tiles on the projections and DRRs. The tile configuration (tile locations) is automatically optimized to keep the tumor within a single tile, which then matches poorly with the corresponding DRR tile. A pixel-based linear transformation is determined by linear interpolation of the tile transformation results obtained during tile matching. The DRR is transformed to the projection image level and subtracted from it. The resulting subtracted image then contains only the tumor. A DRR of the tumor is registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (Brainlab) for dynamic tumor tracking in phantom studies. Radiation-opaque markers are implanted and used as ground truth for tumor positions. Although other organs and bony structures introduce strong signals superimposed on tumors at some angles, this method accurately locates tumors on every projection over 12 gantry angles. The maximum error is less than 2.6 mm while the total average error is 1.0 mm. Conclusion: This algorithm is capable of detecting tumors without markers despite strong background signals.
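
    The per-tile similarity measure described in the Methods can be illustrated compactly. The sketch below (the tile size, the tile-configuration optimization, and the subsequent transformation fitting are all omitted; names are illustrative) computes the normalized cross correlation of corresponding projection and DRR tiles; the tile containing the moving tumor is expected to score poorly.

```python
import numpy as np

def ncc(tile_a, tile_b):
    """Normalized cross correlation of two equally sized image tiles."""
    a = tile_a.astype(float) - tile_a.mean()
    b = tile_b.astype(float) - tile_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def tile_scores(projection, drr, tile=64):
    """NCC for each tile of a projection/DRR pair; low scores flag the tumor tile."""
    h, w = projection.shape
    scores = {}
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            scores[(y, x)] = ncc(projection[y:y + tile, x:x + tile],
                                 drr[y:y + tile, x:x + tile])
    return scores
```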

  2. Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Hosseininaveh Ahmadabadian, Ali; Robson, Stuart; Boehm, Jan; Shortis, Mark

    2013-04-01

    Multi-View Stereo (MVS), as a low-cost technique for precise 3D reconstruction, can be a rival to laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a huge number of stereo images captured of the object (e.g., 200 high-resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, the capture and processing time increases when a vast number of high-resolution images are employed. Moreover, some parts of the object are often missing due to the lack of coverage of all areas. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo, or optionally single, images from a large image dataset. The approach focuses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, which is a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon Laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.

  3. Boundary conditions in photoacoustic tomography and image reconstruction.

    PubMed

    Wang, Lihong V; Yang, Xinmai

    2007-01-01

    Recently, the field of photoacoustic tomography has experienced considerable growth. Although several commercially available pure optical imaging modalities, including confocal microscopy, two-photon microscopy, and optical coherence tomography, have been highly successful, none of these technologies can penetrate beyond approximately 1 mm into scattering biological tissues because all of them are based on ballistic and quasiballistic photons. Consequently, heretofore there has been a void in high-resolution optical imaging beyond this depth limit. Photoacoustic tomography has filled this void by combining high ultrasonic resolution and strong optical contrast in a single modality. However, it has been assumed in reconstruction of photoacoustic tomography until now that ultrasound propagates in a boundary-free infinite medium. We present the boundary conditions that must be considered in certain imaging configurations; the associated inverse solutions for image reconstruction are provided and validated by numerical simulation and experiment. Partial planar, cylindrical, and spherical detection configurations with a planar boundary are covered, where the boundary can be either hard or soft. Analogously to the method of images of sources, which is commonly used in forward problems, the ultrasonic detectors are imaged about the boundary to satisfy the boundary condition in the inverse problem. PMID:17343502

  4. POCSENSE: POCS-based reconstruction for sensitivity encoded magnetic resonance imaging.

    PubMed

    Samsonov, Alexei A; Kholmovski, Eugene G; Parker, Dennis L; Johnson, Chris R

    2004-12-01

    A novel method for iterative reconstruction of images from undersampled MRI data acquired by multiple receiver coil systems is presented. Based on Projection onto Convex Sets (POCS) formalism, the method for SENSitivity Encoded data reconstruction (POCSENSE) can be readily modified to include various linear and nonlinear reconstruction constraints. Such constraints may be beneficial for reconstructing highly and overcritically undersampled data sets to improve image quality. POCSENSE is conceptually simple and numerically efficient and can reconstruct images from data sampled on arbitrary k-space trajectories. The applicability of POCSENSE for image reconstruction with nonlinear constraining was demonstrated using a wide range of simulated and real MRI data.

  5. Image reconstruction and optimization using a terahertz scanned imaging system

    NASA Astrophysics Data System (ADS)

    Yıldırım, İhsan Ozan; Özkan, Vedat A.; Idikut, Fırat; Takan, Taylan; Şahin, Asaf B.; Altan, Hakan

    2014-10-01

    Due to the limited number of array detection architectures in the millimeter-wave to terahertz region of the electromagnetic spectrum, imaging schemes with scan architectures are typically employed. In these configurations, the interplay between the frequencies used to illuminate the scene and the optics employed plays an important role in the quality of the formed image. Using a multiplied Schottky-diode-based terahertz transceiver operating at 340 GHz in a stand-off detection scheme, the image quality of a metal target was assessed as a function of the scanning speed of the galvanometer mirrors as well as the optical system that was constructed. Background effects such as leakage on the receiver were minimized by conditioning the signal at the output of the transceiver. Then, the image of the target was simulated based on known parameters of the optical system, and the measured images were compared to the simulation. Using an image quality index based on a χ² metric, the simulated and measured images were found to be in good agreement, with a value of χ² = 0.14. The measurements shown here will aid the future development of larger stand-off imaging systems that work in the terahertz frequency range.

  6. Model-based microwave image reconstruction: simulations and experiments

    SciTech Connect

    Ciocan, Razvan; Jiang Huabei

    2004-12-01

    We describe an integrated microwave imaging system that can provide spatial maps of the dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800-1200 MHz) was built based on a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularization. System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method coupled with the Bayliss and Turkel radiation boundary conditions was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76 mm diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion of 14 mm in diameter is the smallest object that can presently be fully characterized using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data.

  7. PET image reconstruction with anatomical edge guided level set prior

    NASA Astrophysics Data System (ADS)

    Cheng-Liao, Jinxiu; Qi, Jinyi

    2011-11-01

    Acquiring both anatomical and functional images during one scan, PET/CT systems improve the ability to detect and localize abnormal uptakes. In addition, CT images provide anatomical boundary information that can be used to regularize positron emission tomography (PET) images. Here we propose a new approach to maximum a posteriori reconstruction of PET images with a level set prior guided by anatomical edges. The image prior models both the smoothness of PET images and the similarity between functional boundaries in PET and anatomical boundaries in CT. Level set functions (LSFs) are used to represent smooth and closed functional boundaries. The proposed method does not assume an exact match between PET and CT boundaries. Instead, it encourages similarity between the two boundaries, while allowing different region definition in PET images to accommodate possible signal and position mismatch between functional and anatomical images. While the functional boundaries are guaranteed to be closed by the LSFs, the proposed method does not require closed anatomical boundaries and can utilize incomplete edges obtained from an automatic edge detection algorithm. We conducted computer simulations to evaluate the performance of the proposed method. Two digital phantoms were constructed based on the Digimouse data and a human CT image, respectively. Anatomical edges were extracted automatically from the CT images. Tumors were simulated in the PET phantoms with different mismatched anatomical boundaries. Compared with existing methods, the new method achieved better bias-variance performance. The proposed method was also applied to real mouse data and achieved higher contrast than other methods.

  8. Three-dimensional image reconstruction in object space

    SciTech Connect

    Kinahan, P.E.; Rogers, J.G.; Harrop, R.; Johnson, R.R.

    1988-02-01

    An analytic three-dimensional image reconstruction algorithm which can utilize the cross-plane gamma rays detected by a wide solid-angle PET system is presented. Unlike current analytic algorithms it does not use Fourier transform methods, although mathematical equivalence to Fourier transform methods is proven. Results of implementing the algorithm are briefly discussed. An extension of the algorithm to utilize all measured cross-plane gamma rays is discussed.

  9. LIRA: Low-counts Image Reconstruction and Analysis

    NASA Astrophysics Data System (ADS)

    Connors, Alanna; Kashyap, Vinay; Siemiginowska, Aneta; van Dyk, David; Stein, Nathan M.

    2016-01-01

    LIRA (Low-counts Image Reconstruction and Analysis) deconvolves any unknown sky components, provides a fully Poisson 'goodness-of-fit' for any best-fit model, and quantifies uncertainties on the existence and shape of unknown sky. It does this without resorting to χ2 or rebinning, which can lose high-resolution information. It is written in R and requires the FITSio package.

  10. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    SciTech Connect

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-11-15

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
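
    A heavily simplified sketch of the interplay the abstract describes is given below: convex-set projections enforce positivity and the data tolerance, and gradient-style steps reduce the image total variation (the p=1 case of TpV). This is only a generic skeleton under those assumptions, not the authors' algorithm; `lip` is a user-supplied step scale (roughly 1/||A||²).

```python
import numpy as np

def tv_subgradient(x):
    """Subgradient of the anisotropic total variation sum|dx| + sum|dy|."""
    g = np.zeros_like(x)
    dx = np.sign(np.diff(x, axis=1))          # horizontal differences, shape (H, W-1)
    dy = np.sign(np.diff(x, axis=0))          # vertical differences,   shape (H-1, W)
    g[:, :-1] -= dx
    g[:, 1:] += dx
    g[:-1, :] -= dy
    g[1:, :] += dy
    return g

def tpv_pocs_iteration(x, A, y, tol, lip, tv_step=0.05, n_tv=10):
    """One outer iteration: data-tolerance and positivity steps, then TV descent."""
    resid = A @ x.ravel() - y
    if np.linalg.norm(resid) > tol:                      # enforce the data tolerance
        x = x - (A.T @ resid).reshape(x.shape) / lip     # gradient step on ||Ax - y||^2
    x = np.maximum(x, 0.0)                               # projection onto the set x >= 0
    for _ in range(n_tv):                                # lower the total (p=1) variation
        g = tv_subgradient(x)
        x = x - tv_step * g / (np.linalg.norm(g) + 1e-12)
    return x
```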

  11. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    PubMed Central

    Sidky, Emil Y.; Pan, Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging. PMID:19994501

  12. Segmentation of tooth in CT images for the 3D reconstruction of teeth

    NASA Astrophysics Data System (ADS)

    Heo, Hoon; Chae, Ok-Sam

    2004-05-01

    In the dental field, a 3D tooth model in which each tooth can be manipulated individually is an essential component for the simulation of orthodontic surgery and treatment. To reconstruct such a tooth model from CT slices, we need to define the accurate boundary of each tooth in the CT slices. However, the global threshold method, which is commonly used in most existing 3D reconstruction systems, is not effective for tooth segmentation in CT images. In tooth CT slices, some teeth touch other teeth and some are located inside the alveolar bone, whose intensity is similar to that of the teeth. In this paper, we propose an image segmentation algorithm based on B-spline curve fitting to produce smooth tooth regions from such CT slices. The proposed algorithm prevents the malfitting problem of the B-spline algorithm by providing an accurate initial tooth boundary for the fitting process. This paper proposes an optimal threshold scheme that uses the intensity and shape information passed from the previous slice for initial boundary generation, and an efficient B-spline fitting method based on a genetic algorithm. The test results show that the proposed method detects the contour of each individual tooth successfully and can produce a smooth and accurate 3D tooth model for the simulation of orthodontic surgery and treatment.

  13. Active illumination based 3D surface reconstruction and registration for image guided medialization laryngoplasty

    NASA Astrophysics Data System (ADS)

    Jin, Ge; Lee, Sang-Joon; Hahn, James K.; Bielamowicz, Steven; Mittal, Rajat; Walsh, Raymond

    2007-03-01

    Medialization laryngoplasty is a surgical procedure to improve the voice function of patients with vocal fold paresis and paralysis. An image-guided system for medialization laryngoplasty will help surgeons to accurately place the implant and thus reduce the failure rate of the surgery. One of the fundamental challenges in an image-guided system is to accurately register the preoperative radiological data to the intraoperative anatomical structure of the patient. In this paper, we present a combined surface- and fiducial-based registration method to register the preoperative 3D CT data to the intraoperative surface of the larynx. To accurately model the exposed surface area, a structured-light-based stereo vision technique is used for the surface reconstruction. We combined a gray code pattern and multi-line shifting to generate the intraoperative surface of the larynx. To register the point clouds from the intraoperative stage to the preoperative 3D CT data, a shape-prior-based ICP method is proposed to quickly register the two surfaces. The proposed approach is capable of tracking the fiducial markers and reconstructing the surface of the larynx with no damage to the anatomical structure. We used off-the-shelf digital cameras, an LCD projector and a rapid 3D prototyper to develop our experimental system. The final RMS error in the registration is less than 1 mm.

  14. Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging.

    PubMed

    Jung, Jae-Hyun; Hong, Keehoon; Park, Gilbae; Chung, Indeok; Park, Jae-Hyeung; Lee, Byoungho

    2010-12-01

    We propose a reconstruction method for the occluded region of a three-dimensional (3D) object using depth extraction based on optical flow and triangular mesh reconstruction in integral imaging. The depth information of sub-images from the acquired elemental image set is extracted using optical flow with sub-pixel accuracy, which alleviates the depth quantization problem. The extracted depth maps of the sub-image array are segmented by a depth threshold from histogram-based segmentation and represented as point clouds. The point clouds are projected to the viewpoint of the center sub-image and reconstructed by triangular mesh reconstruction. The experimental results support the validity of the proposed method, with high peak signal-to-noise ratio and normalized cross-correlation in 3D image recognition.

  15. Impact of measurement precision and noise on superresolution image reconstruction.

    PubMed

    Wood, Sally L; Lee, Shu-Ting; Yang, Gao; Christensen, Marc P; Rajan, Dinesh

    2008-04-01

    The performance of uniform and nonuniform detector arrays for application to the PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor) flat camera design is analyzed for measurement noise environments including quantization noise and Gaussian and Poisson processes. Image data acquired from a commercial camera with 8 bit and 14 bit output options are analyzed, and estimated noise levels are computed. Noise variances estimated from the measurement values are used in the optimal linear estimators for superresolution image reconstruction.

  16. Image stitching and image reconstruction of intestines captured using radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau

    2012-05-01

    This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object has a shape that is similar to a cylinder, such that a triaxial platform can be used to push the RICE into the sample and capture radial images. Then four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, higher than 80.69 compared to the original image. Furthermore, a living animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images that were captured in the living animal experiment. This method is very attractive because unlike the other methods, in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.

  17. Ultrafast image reconstruction of a dual-head PET system by use of CUDA architecture

    NASA Astrophysics Data System (ADS)

    Hung, YuKai; Dong, Yun; Chern, Felix R.; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu; Chou, Cheng-Ying

    2011-03-01

    Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. For small-animal PET imaging, it is of major interest to improve the sensitivity and resolution. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads. The highly accurate system response matrix can be computed by use of Monte Carlo simulations and stored for iterative reconstruction methods. The detector head employs 2.1 × 2.1 × 20 mm³ LSO/LYSO crystals with a pitch of 2.4 mm, and thus will produce more than 224 million lines of response (LORs). By exploiting the symmetry property of the dual-head system, the computational demands can be dramatically reduced. Nevertheless, the tremendously large system size and the repetitive reading of the system response matrix from the hard drive result in extremely long reconstruction times. The implementation of an ordered-subset expectation maximization (OSEM) algorithm on a CPU system (four Athlon x64 2.0 GHz PCs) took about 2 days for 1 iteration. Consequently, it is imperative to significantly accelerate the reconstruction process to make it more useful for practical applications. Specifically, the graphics processing unit (GPU), with its highly parallel architecture of computing units, can be exploited to achieve a substantial speedup. In this work, we employed a state-of-the-art GPU, the NVIDIA Tesla C2050 based on the Fermi generation of the compute unified device architecture (CUDA), to complete the reconstruction process within a few minutes. We demonstrated that reconstruction times can be drastically reduced by using the GPU. The OSEM reconstruction algorithm was implemented in both GPU-based and CPU-based codes, and their computational performance was quantitatively analyzed and compared.
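
    For context, the OSEM update being accelerated can be written compactly. The dense-matrix toy version below is illustrative only; the actual system described above exploits sparsity, symmetry, and CUDA kernels to handle hundreds of millions of LORs.

```python
import numpy as np

def osem(A, counts, subsets, n_iter=2):
    """Ordered-subset EM.

    A       : (n_lors, n_voxels) system matrix
    counts  : (n_lors,) measured coincidence counts
    subsets : list of index arrays partitioning the LORs
    """
    x = np.ones(A.shape[1])                              # uniform initial image
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], counts[s]
            sensitivity = As.sum(axis=0)                 # subset sensitivity image
            expected = As @ x                            # forward projection
            ratio = np.where(expected > 0, ys / expected, 0.0)
            update = As.T @ ratio                        # backprojection of the ratios
            x = np.where(sensitivity > 0, x * update / sensitivity, x)
    return x
```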

  18. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
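
    The HO SNR figure of merit used above is typically computed from sample images as SNR² = Δḡᵀ K⁻¹ Δḡ, where Δḡ is the difference of the class means and K the average class covariance. The sketch below is a generic sample-based estimate (in practice, channelized observers or covariance regularization are needed when the number of pixels exceeds the number of samples); it is not the paper's specific estimator.

```python
import numpy as np

def hotelling_snr(signal_imgs, background_imgs):
    """Hotelling observer SNR from two stacks of sample images, shape (n, H, W)."""
    g1 = signal_imgs.reshape(len(signal_imgs), -1)
    g0 = background_imgs.reshape(len(background_imgs), -1)
    dg = g1.mean(axis=0) - g0.mean(axis=0)                 # mean class difference
    # Average class covariance; assumes enough samples for K to be invertible.
    K = 0.5 * (np.cov(g1, rowvar=False) + np.cov(g0, rowvar=False))
    w = np.linalg.solve(K, dg)                             # Hotelling template w = K^-1 dg
    return float(np.sqrt(dg @ w))                          # SNR = sqrt(dg' K^-1 dg)
```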

  19. Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology

    NASA Astrophysics Data System (ADS)

    Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan

    2016-05-01

    This paper concerns the problem of platform-vibration-induced band-to-band misregistration in an acousto-optic imaging spectrometer for spaceborne applications. Registering images of different bands formed at different times or different positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented that uses the polychromatic undiffracted beam of the AOTF. The factors affecting motion detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system is proposed for the first time, with reduced distortion and chromatic aberration, indicating the potential for higher registration accuracy. Consequently, a spectrum restoration experiment using an additional motion detection channel is presented for the first time, which demonstrates the accurate spectral image registration capability of this technique.

  20. Iterative Self-Dual Reconstruction on Radar Image Recovery

    SciTech Connect

    Martins, Charles; Medeiros, Fatima; Ushizima, Daniela; Bezerra, Francisco; Marques, Regis; Mascarenhas, Nelson

    2010-05-21

    Imaging systems such as ultrasound, sonar, laser and synthetic aperture radar (SAR) are subject to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter for multiplicative noise. We describe a multiscale scheme that preserves sharp edges while it smooths homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.

  1. Multiresolution 3-D reconstruction from side-scan sonar images.

    PubMed

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

  2. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    PubMed Central

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data and thereby increase the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang’E-1, compared to the existing space resection model. PMID:27077855

  3. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data and thereby increase the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang'E-1, compared to the existing space resection model.

  4. Reconstructing chromophore concentration images directly by continuous-wave diffuse optical tomography.

    PubMed

    Li, Ang; Zhang, Quan; Culver, Joseph P; Miller, Eric L; Boas, David A

    2004-02-01

    We present an algorithm to reconstruct chromophore concentration images directly rather than following the traditional two-step process of reconstructing wavelength-dependent absorption coefficient images and then calculating chromophore concentration images. This procedure imposes prior spectral information on the image reconstruction, which results in a dramatic improvement in the image contrast-to-noise ratio of better than 100%. We demonstrate this improvement with simulations and a dynamic blood phantom experiment. PMID:14759043
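
    The spectral prior referred to above is the linear relation μ_a(λ) = Σ_i ε_i(λ) c_i between absorption and chromophore concentrations; in the direct approach this relation is folded into the tomographic forward model so that the unknowns are the concentrations themselves. The snippet below only illustrates the single-voxel spectral relation with placeholder extinction values, not the tomographic reconstruction.

```python
import numpy as np

# Rows: wavelengths, columns: chromophores (e.g. HbO2, Hb). Placeholder values only.
extinction = np.array([[0.9, 0.3],
                       [0.4, 1.1]])

mu_a = np.array([0.021, 0.019])   # absorption coefficients measured at two wavelengths

# Spectral model: mu_a = extinction @ c  ->  solve directly for the concentrations c.
concentrations, *_ = np.linalg.lstsq(extinction, mu_a, rcond=None)
print(concentrations)
```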

  5. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  6. Highly accurate SNR measurement using the covariance of two SEM images with the identical view.

    PubMed

    Oho, Eisaku; Suzuki, Kazuhiko

    2012-01-01

    The quality of an SEM image is strongly influenced by the extent of noise. As a well-known method in the field of SEM, the covariance can be applied to measure the signal-to-noise ratio (SNR). This method has the potential for highly accurate SNR measurement, an ability that has hardly been recognized until now. If the precautions discussed in this article are adopted, the method can demonstrate its real ability. These precautions concern "proper acquisition of two images with the identical view," "alignment of the aperture diaphragm," "reduction of charging phenomena," "elimination of particular noises," and "accurate focusing." As necessary, characteristics of the SEM signal and noise are investigated from several standpoints. When the maximum performance of this measurement is exploited, the SNR of SEM images obtained under a wide variety of SEM operating conditions and specimens can be measured accurately.
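
    As a rough illustration of the covariance idea (not the authors' exact procedure), the sketch below assumes the two registered frames share the same signal and carry independent, zero-mean noise, so the cross-covariance estimates the signal variance and the SNR follows from the normalized cross-correlation r as r / (1 - r).

        import numpy as np

        def covariance_snr(img1, img2):
            """SNR estimate from two SEM images of the identical view.
            Assumes identical signal and independent noise in the two frames."""
            a = np.ravel(img1).astype(float); a -= a.mean()
            b = np.ravel(img2).astype(float); b -= b.mean()
            cov = np.mean(a * b)                                  # ~ signal variance
            r = cov / np.sqrt(np.mean(a * a) * np.mean(b * b))    # cross-correlation coefficient
            return r / (1.0 - r)

    As the abstract stresses, the estimate is only as good as the acquisition: residual drift between the two frames, charging, or defocus all bias r downward and hence lower the measured SNR.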

  7. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    SciTech Connect

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  8. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle-avoidance stages are essential for CE-3's safe soft landing, so precise spacecraft trajectories in these stages are of great significance for verifying the orbital control strategy, optimizing orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last only about 25 s, it is difficult to capture the spacecraft's subtle movement with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3, owing to technical advantages such as independence from the lunar gravity field and spacecraft kinetic models, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed by Single Image Space Resection (SISR) from landing camera images from frame 3092 to frame 3180, spanning about 9 s. The results show that CE-3's subtle movements during the hovering stage are revealed by the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.

  9. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.

  10. Using Copula Distributions to Support More Accurate Imaging-Based Diagnostic Classifiers for Neuropsychiatric Disorders

    PubMed Central

    Bansal, Ravi; Hao, Xuejun; Liu, Jun; Peterson, Bradley S.

    2014-01-01

    Many investigators have tried to apply machine learning techniques to magnetic resonance images (MRIs) of the brain in order to diagnose neuropsychiatric disorders. Usually the number of brain imaging measures (such as measures of cortical thickness and measures of local surface morphology) derived from the MRIs (i.e., their dimensionality) has been large (e.g. >10) relative to the number of participants who provide the MRI data (<100). Sparse data in a high dimensional space increases the variability of the classification rules that machine learning algorithms generate, thereby limiting the validity, reproducibility, and generalizability of those classifiers. The accuracy and stability of the classifiers can improve significantly if the multivariate distributions of the imaging measures can be estimated accurately. To accurately estimate the multivariate distributions using sparse data, we propose to estimate first the univariate distributions of imaging data and then combine them using a Copula to generate more accurate estimates of their multivariate distributions. We then sample the estimated Copula distributions to generate dense sets of imaging measures and use those measures to train classifiers. We hypothesize that the dense sets of brain imaging measures will generate classifiers that are stable to variations in brain imaging measures, thereby improving the reproducibility, validity, and generalizability of diagnostic classification algorithms in imaging datasets from clinical populations. In our experiments, we used both computer-generated and real-world brain imaging datasets to assess the accuracy of multivariate Copula distributions in estimating the corresponding multivariate distributions of real-world imaging data. Our experiments showed that diagnostic classifiers generated using imaging measures sampled from the Copula were significantly more accurate and more reproducible than were the classifiers generated using either the real-world imaging
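
    A minimal sketch of the copula step (assuming a Gaussian copula with empirical marginals, which may differ from the authors' choices): fit each measure's univariate distribution, estimate the dependence on normal scores, then draw dense synthetic measure vectors to train the classifier.

        import numpy as np
        from scipy import stats

        def fit_gaussian_copula(X):
            """X: (n_subjects, n_measures) imaging measures.
            Returns the normal-score correlation matrix and the sorted marginals."""
            n = X.shape[0]
            ranks = np.argsort(np.argsort(X, axis=0), axis=0)
            Z = stats.norm.ppf((ranks + 0.5) / n)          # normal scores per measure
            return np.corrcoef(Z, rowvar=False), np.sort(X, axis=0)

        def sample_gaussian_copula(R, marginals, n_samples, seed=0):
            """Draw synthetic measure vectors with the fitted dependence and marginals."""
            rng = np.random.default_rng(seed)
            d = R.shape[0]
            Z = rng.multivariate_normal(np.zeros(d), R, size=n_samples)
            U = stats.norm.cdf(Z)
            idx = np.clip((U * marginals.shape[0]).astype(int), 0, marginals.shape[0] - 1)
            return marginals[idx, np.arange(d)]            # inverse empirical marginal CDFs

    The dense synthetic samples returned by sample_gaussian_copula would then be labelled and used to train whatever diagnostic classifier is under study, which is the augmentation strategy the abstract describes.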

  11. Immobilization Using Dental Material Casts Facilitates Accurate Serial and Multimodality Small Animal Imaging

    PubMed Central

    Haney, Chad R.; Fan, Xiaobing; Parasca, Adrian D.; Karczmar, Gregory S.; Halpern, Howard J.; Pelizzari, Charles A.

    2010-01-01

    Custom disposable patient immobilization systems that conform to the patient’s body contours are commonly used to facilitate accurate repeated patient setup for imaging and treatment in radiation therapy. However, in small-animal imaging, immobilization is often overlooked or done in a way that is not conducive to reproducible positioning. This has a negative impact on the potential for accurate analysis of serial or multimodality imaging. We present the use of vinyl polysiloxane dental impression material for immobilization of mice for imaging. Four different materials were examined to identify any potential artifacts using magnetic resonance techniques. A water phantom placed inside the cast was used at 4.7 T with magnetic resonance imaging and showed no effect at the center of the image when compared with images without the cast. A negligible effect was seen near the ends of the coil. Each material had no detectable signal using electron paramagnetic resonance imaging at 9 mT. The use of dental material also greatly enhances the use of fiducial markers that can be embedded in the mold. Therefore, image registration is simplified as the immobilization of the animal and fiducials together helps in translating from one image coordinate system to another. PMID:20827425

  12. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  13. Task-driven image acquisition and reconstruction in cone-beam CT.

    PubMed

    Gang, Grace J; Stayman, J Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H

    2015-04-21

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ± 30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the tilt
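
    The acquisition search described above can be pictured as a nested loop; the sketch below is schematic only (our illustration), with detectability(tilt, mA, kernel) a hypothetical callable standing in for the task-based d' computed from the system and anatomical models.

        import numpy as np

        def task_driven_search(detectability, n_views, tilts=np.arange(-30.0, 31.0, 5.0)):
            """Exhaustive search over orbital tilt with alternating coordinate updates
            of the view-dependent mA and reconstruction-kernel profiles."""
            best_d, best = -np.inf, None
            for tilt in tilts:
                mA = np.ones(n_views)              # unmodulated start; total mAs held fixed
                kernel = np.full(n_views, 0.5)     # per-view kernel smoothness in [0, 1]
                for _ in range(5):                 # alternating optimization passes
                    for v in range(n_views):       # tube-current modulation update
                        cands = []
                        for s in (0.8, 1.0, 1.25):
                            trial = mA.copy()
                            trial[v] *= s
                            trial *= n_views / trial.sum()     # renormalize total mAs
                            cands.append((detectability(tilt, trial, kernel), trial))
                        mA = max(cands, key=lambda c: c[0])[1]
                    for v in range(n_views):       # view-dependent kernel update
                        cands = []
                        for s in (-0.1, 0.0, 0.1):
                            trial = kernel.copy()
                            trial[v] = np.clip(trial[v] + s, 0.0, 1.0)
                            cands.append((detectability(tilt, mA, trial), trial))
                        kernel = max(cands, key=lambda c: c[0])[1]
                d = detectability(tilt, mA, kernel)
                if d > best_d:
                    best_d, best = d, (tilt, mA, kernel)
            return best_d, best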

  14. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and
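
    One of the simpler automated criteria mentioned above, the histogram-minimum method, can be sketched as follows (our illustration, assuming a roughly bimodal grey-level histogram; the study's preferred second-derivative method additionally requires intensity profiles through individual objects).

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.signal import find_peaks

        def histogram_valley_threshold(image, bins=256, sigma=2.0):
            """Smooth the grey-level histogram, locate the background and cell modes,
            and place the threshold at the deepest valley between them."""
            hist, edges = np.histogram(image.ravel(), bins=bins)
            smooth = gaussian_filter1d(hist.astype(float), sigma)
            peaks, _ = find_peaks(smooth)
            if len(peaks) < 2:
                raise ValueError("histogram does not look bimodal")
            bg = peaks[np.argmax(smooth[peaks])]            # tallest peak = background mode
            others = peaks[peaks != bg]
            cell = others[np.argmax(smooth[others])]        # tallest remaining peak = cell mode
            lo, hi = sorted((int(bg), int(cell)))
            valley = lo + int(np.argmin(smooth[lo:hi + 1]))
            return 0.5 * (edges[valley] + edges[valley + 1])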

  15. Fast two-dimensional super-resolution image reconstruction algorithm for ultra-high emitter density.

    PubMed

    Huang, Jiaqing; Gumpper, Kristyn; Chi, Yuejie; Sun, Mingzhai; Ma, Jianjie

    2015-07-01

    Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that can be handled by the image reconstruction algorithms. Multiple algorithms have been developed to accurately locate the emitters even when they have significant overlaps. Currently, the compressive-sensing-based algorithm (CSSTORM) achieves the highest emitter density. However, CSSTORM is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with ℓ1-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and make it possible for online super-resolution reconstruction of high-density emitters.

  16. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo-SPECT and cross evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. SPECT signal was in excellent linear correlation with OPT data as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  17. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that can meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impact the quality of the collected geospatial data and the reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach starts with a Semi-Global dense Matching procedure that generates a 3D point cloud of the scanned area within the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  18. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.

    2015-03-01

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  19. Fast and efficient fully 3D PET image reconstruction using sparse system matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-10-01

    Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirement. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processing unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometric projection matrix is precomputed based on a simple line integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far fewer nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.
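
    The factored forward model translates directly into code; the sketch below (ours, not the authors' GPU implementation) shows the three sparse factors plugged into a plain MLEM update, which is where the storage and speed savings come from.

        import numpy as np

        def mlem_factored(y, B_sino, G, B_img, n_iter=20):
            """MLEM with a factored PET system matrix P = B_sino @ G @ B_img.
            y      : measured sinogram, 1D array of length n_bins
            B_sino : (n_bins, n_bins) sparse sinogram-blurring matrix
            G      : (n_bins, n_vox) sparse geometric (line-integral) projector
            B_img  : (n_vox, n_vox) sparse image-blurring matrix"""
            forward = lambda x: B_sino @ (G @ (B_img @ x))
            backward = lambda r: B_img.T @ (G.T @ (B_sino.T @ r))
            x = np.ones(B_img.shape[1])                     # uniform initial image
            sens = backward(np.ones_like(y))                # sensitivity image P^T 1
            for _ in range(n_iter):
                ratio = y / np.maximum(forward(x), 1e-12)
                x *= backward(ratio) / np.maximum(sens, 1e-12)
            return x

    As in the abstract, G would be precomputed from a simple line-integral model while B_sino and B_img are estimated from point-source measurements; only the three sparse factors, never their dense product, need to be stored.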

  20. Superresolution image reconstruction using panchromatic and multispectral image fusion

    NASA Astrophysics Data System (ADS)

    Elbakary, M. I.; Alam, M. S.

    2008-08-01

    Hyperspectral imagery is used for a wide variety of applications, including target detection, tracking, agricultural monitoring and natural resources exploration. The main reason for using hyperspectral imagery is that these images reveal spectral information about the scene that is not available in a single band. Unfortunately, many factors such as sensor noise and atmospheric scattering degrade the spatial quality of these images. Recently, many algorithms have been introduced in the literature to improve the resolution of hyperspectral images using co-registered high spatial-resolution imagery such as panchromatic imagery. In this paper, we propose a new algorithm to enhance the spatial resolution of low-resolution hyperspectral bands using strongly correlated and co-registered high spatial-resolution panchromatic imagery. The proposed algorithm constructs the superresolution bands corresponding to the low-resolution bands using a global correlation enhancement technique. The global enhancement is based on least-squares regression and histogram matching to improve the estimated interpolation of the spatial resolution. The introduced algorithm is an improvement over Price's algorithm, which uses only the global correlation for spatial resolution enhancement. Numerous studies were conducted to investigate the effectiveness of the proposed algorithm in achieving the enhancement compared to the traditional superresolution enhancement algorithm. Experimental results obtained using hyperspectral data derived from an airborne imaging sensor are presented to verify the superiority of the proposed algorithm.
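
    A stripped-down version of the global-correlation idea (our sketch, assuming the panchromatic image is co-registered and its dimensions are exact multiples of the band's): regress the low-resolution band against a degraded copy of the panchromatic image, predict the band at full resolution, then histogram-match the prediction back to the band's intensity distribution.

        import numpy as np
        from scipy.ndimage import zoom

        def enhance_band(lr_band, pan, scale):
            """Least-squares regression plus histogram matching for one hyperspectral band.
            lr_band : (h, w) low-resolution band;  pan : (h*scale, w*scale) panchromatic image."""
            pan_lr = zoom(pan, 1.0 / scale, order=1)              # degrade pan to band resolution
            A = np.column_stack([pan_lr.ravel(), np.ones(pan_lr.size)])
            (a, b), *_ = np.linalg.lstsq(A, lr_band.ravel(), rcond=None)   # band ~ a*pan + b
            hr_est = a * pan + b                                  # predict at full resolution
            # Histogram matching: impose the band's intensity distribution on the estimate.
            src = np.sort(hr_est.ravel())
            ref = np.sort(lr_band.ravel())
            ref_resampled = np.interp(np.linspace(0, 1, src.size),
                                      np.linspace(0, 1, ref.size), ref)
            matched = np.interp(hr_est.ravel(), src, ref_resampled)
            return matched.reshape(hr_est.shape)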

  1. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  2. Application of DIRI dynamic infrared imaging in reconstructive surgery

    NASA Astrophysics Data System (ADS)

    Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier

    2006-04-01

    We have developed the BioScanIR System based on QWIP (Quantum Well Infrared Photodetector) technology. Data collected by this sensor are processed using the DIRI (Dynamic Infrared Imaging) algorithms. The combination of DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of tissues and organs associated with blood perfusion due to certain diseases such as cancer, vascular disease and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan, with definitive results available in minutes. The algorithms used include Fast Fourier Transform (FFT), motion artifact correction, spectral analysis and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing and peripheral vascular disease management.

  3. Simultaneous reconstruction and segmentation for dynamic SPECT imaging

    NASA Astrophysics Data System (ADS)

    Burger, Martin; Rossmanith, Carolin; Zhang, Xiaoqun

    2016-10-01

    This work deals with the reconstruction of dynamic images that incorporate characteristic dynamics in certain subregions, as arising for the kinetics of many tracers in emission tomography (SPECT, PET). We make use of a basis function approach for the unknown tracer concentration by assuming that the region of interest can be divided into subregions with spatially constant concentration curves. Applying a regularised variational framework reminiscent of the Chan-Vese model for image segmentation we simultaneously reconstruct both the labelling functions of the subregions as well as the subconcentrations within each region. Our particular focus is on applications in SPECT with the Poisson noise model, resulting in a Kullback-Leibler data fidelity in the variational approach. We present a detailed analysis of the proposed variational model and prove existence of minimisers as well as error estimates. The latter apply to a more general class of problems and generalise existing results in literature since we deal with a nonlinear forward operator and a nonquadratic data fidelity. A computational algorithm based on alternating minimisation and splitting techniques is developed for the solution of the problem and tested on appropriately designed synthetic data sets. For those we compare the results to those of standard EM reconstructions and investigate the effects of Poisson noise in the data.

  4. Constrain static target kinetic iterative image reconstruction for 4D cardiac CT imaging

    NASA Astrophysics Data System (ADS)

    Alessio, Adam M.; La Riviere, Patrick J.

    2011-03-01

    Iterative image reconstruction offers improved signal to noise properties for CT imaging. A primary challenge with iterative methods is the substantial computation time. This computation time is even more prohibitive in 4D imaging applications, such as cardiac gated or dynamic acquisition sequences. In this work, we propose updating only the time-varying elements of a 4D image sequence while constraining the static elements to be fixed or slowly varying in time. We test the method with simulations of 4D acquisitions based on measured cardiac patient data from a) a retrospective cardiac-gated CT acquisition and b) a dynamic perfusion CT acquisition. We target the kinetic elements with one of two methods: 1) position a circular ROI on the heart, assuming the area outside the ROI is essentially static throughout the imaging time; and 2) select varying elements from the coefficient of variation image formed from fast analytic reconstruction of all time frames. Targeted kinetic elements are updated with each iteration, while static elements remain fixed at initial image values formed from the reconstruction of data from all time frames. Results confirm that the computation time is proportional to the number of targeted elements; our simulations suggest that <30% of elements need to be updated in each frame, leading to more than 3-fold reductions in reconstruction time. The images reconstructed with the proposed method have matched mean square error with full 4D reconstruction. The proposed method is amenable to most optimization algorithms and offers the potential for significant computation improvements, which could be traded off for more sophisticated system models or penalty terms.
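
    The element-targeting step is largely independent of the particular iterative algorithm. In the sketch below (ours), the kinetic mask comes from the coefficient-of-variation image of fast analytic reconstructions (option 2 above), and update_frame is a hypothetical routine that runs the iterative update on masked voxels only; everything outside the mask stays fixed at the all-frames reconstruction.

        import numpy as np

        def targeted_4d_reconstruction(fbp_frames, static_image, update_frame,
                                       cv_threshold=0.05, n_iter=10):
            """fbp_frames   : (T, n_vox) fast analytic reconstruction of each time frame
               static_image : (n_vox,) reconstruction of the data from all frames combined
               update_frame : hypothetical callable(t, image, mask, n_iter) -> image that
                              updates only the voxels where mask is True"""
            mean = fbp_frames.mean(axis=0)
            cv = fbp_frames.std(axis=0) / np.maximum(np.abs(mean), 1e-9)
            kinetic_mask = cv > cv_threshold                    # time-varying voxels only
            frames = []
            for t in range(fbp_frames.shape[0]):
                img = static_image.copy()                       # static voxels stay fixed
                frames.append(update_frame(t, img, kinetic_mask, n_iter))
            return np.stack(frames), kinetic_mask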

  5. Spectral-overlap approach to multiframe superresolution image reconstruction.

    PubMed

    Cohen, Edward; Picard, Richard H; Crabtree, Peter N

    2016-05-20

    Various techniques and algorithms have been developed to improve the resolution of sensor-aliased imagery captured with multiple subpixel-displaced frames on an undersampled pixelated image plane. These dealiasing algorithms are typically known as multiframe superresolution (SR), or geometric SR to emphasize the role of the focal-plane array. Multiple low-resolution (LR) aliased frames of the same scene are captured and allocated to a common high-resolution (HR) reconstruction grid, leading to the possibility of an alias-free reconstruction, as long as the HR sampling rate is above the Nyquist rate. Allocating LR-frame irradiances to HR frames requires the use of appropriate weights. Here we present a novel approach in the spectral domain to calculating exact weights based on spatial overlap areas, which we call the spectral-overlap (SO) method. We emphasize that the SO method is not a spectral approach but rather an approach to calculating spatial weights that uses spectral decompositions to exploit the array properties of the HR and LR pixels. The method is capable of dealing with arbitrary aliasing factors and interframe motions consisting of in-plane translations and rotations. We calculate example reconstructed HR images (the inverse problem) from synthetic aliased images for integer and for fractional aliasing factors. We show the utility of the SO-generated overlap-area weights in both noniterative and iterative reconstructions with known or unknown aliasing factor. We show how the overlap weights can be used to generate the Green's function (pixel response function) for noniterative dealiasing. In addition, we show how the overlap-area weights can be used to generate synthetic aliased images (the forward problem). We compare the SO approach to the spatial-domain geometric approach of O'Rourke and find virtually identical high accuracy but with significant enhancements in speed for SO. We also compare the SO weights to interpolated weights and find that

  6. Improved proton computed tomography by dual modality image reconstruction

    SciTech Connect

    Hansen, David C. Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan. The proton CT images were reconstructed using a constrained nonlinear conjugate gradient algorithm, minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the spatial frequency at 50% modulation transfer function (MTF = 0.5) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the MTF = 0.5 frequency dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of prior image proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360

  7. Imaging plant growth in 4D: robust tissue reconstruction and lineaging at cell resolution.

    PubMed

    Fernandez, Romain; Das, Pradeep; Mirabet, Vincent; Moscardi, Eric; Traas, Jan; Verdeil, Jean-Luc; Malandain, Grégoire; Godin, Christophe

    2010-07-01

    Quantitative information on growing organs is required to better understand morphogenesis in both plants and animals. However, detailed analyses of growth patterns at cellular resolution have remained elusive. We developed an approach, multiangle image acquisition, three-dimensional reconstruction and cell segmentation-automated lineage tracking (MARS-ALT), in which we imaged whole organs from multiple angles, computationally merged and segmented these images to provide accurate cell identification in three dimensions and automatically tracked cell lineages through multiple rounds of cell division during development. Using these methods, we quantitatively analyzed Arabidopsis thaliana flower development at cell resolution, which revealed differential growth patterns of key regions during early stages of floral morphogenesis. Lastly, using rice roots, we demonstrated that this approach is both generic and scalable.

  8. Imaging plant growth in 4D: robust tissue reconstruction and lineaging at cell resolution.

    PubMed

    Fernandez, Romain; Das, Pradeep; Mirabet, Vincent; Moscardi, Eric; Traas, Jan; Verdeil, Jean-Luc; Malandain, Grégoire; Godin, Christophe

    2010-07-01

    Quantitative information on growing organs is required to better understand morphogenesis in both plants and animals. However, detailed analyses of growth patterns at cellular resolution have remained elusive. We developed an approach, multiangle image acquisition, three-dimensional reconstruction and cell segmentation-automated lineage tracking (MARS-ALT), in which we imaged whole organs from multiple angles, computationally merged and segmented these images to provide accurate cell identification in three dimensions and automatically tracked cell lineages through multiple rounds of cell division during development. Using these methods, we quantitatively analyzed Arabidopsis thaliana flower development at cell resolution, which revealed differential growth patterns of key regions during early stages of floral morphogenesis. Lastly, using rice roots, we demonstrated that this approach is both generic and scalable. PMID:20543845

  9. Noise spatial nonuniformity and the impact of statistical image reconstruction in CT myocardial perfusion imaging

    SciTech Connect

    Lauzier, Pascal Theriault; Tang Jie; Speidel, Michael A.; Chen Guanghong

    2012-07-15

    Purpose: To achieve high temporal resolution in CT myocardial perfusion imaging (MPI), images are often reconstructed using filtered backprojection (FBP) algorithms from data acquired within a short-scan angular range. However, the variation in the central angle from one time frame to the next in gated short scans has been shown to create detrimental partial scan artifacts when performing quantitative MPI measurements. This study has two main purposes. (1) To demonstrate the existence of a distinct detrimental effect in short-scan FBP, i.e., the introduction of a nonuniform spatial image noise distribution; this nonuniformity can lead to unexpectedly high image noise and streaking artifacts, which may affect CT MPI quantification. (2) To demonstrate that statistical image reconstruction (SIR) algorithms can be a potential solution to address the nonuniform spatial noise distribution problem and can also lead to radiation dose reduction in the context of CT MPI. Methods: Projection datasets from a numerically simulated perfusion phantom and an in vivo animal myocardial perfusion CT scan were used in this study. In the numerical phantom, multiple realizations of Poisson noise were added to projection data at each time frame to investigate the spatial distribution of noise. Images from all datasets were reconstructed using both FBP and SIR reconstruction algorithms. To quantify the spatial distribution of noise, the mean and standard deviation were measured in several regions of interest (ROIs) and analyzed across time frames. In the in vivo study, two low-dose scans at tube currents of 25 and 50 mA were reconstructed using FBP and SIR. Quantitative perfusion metrics, namely, the normalized upslope (NUS), myocardial blood volume (MBV), and first moment transit time (FMT), were measured for two ROIs and compared to reference values obtained from a high-dose scan performed at 500 mA. Results: Images reconstructed using FBP showed a highly nonuniform spatial distribution

  10. A method for 3D reconstruction of coronary arteries using biplane angiography and intravascular ultrasound images.

    PubMed

    Bourantas, Christos V; Kourtis, Iraklis C; Plissiti, Marina E; Fotiadis, Dimitrios I; Katsouras, Christos S; Papafaklis, Michail I; Michalis, Lampros K

    2005-12-01

    The aim of this study is to describe a new method for the three-dimensional reconstruction of coronary arteries and its quantitative validation. Our approach is based on the fusion of the data provided by intravascular ultrasound images (IVUS) and biplane angiographies. A specific segmentation algorithm is used for the detection of the regions of interest in intravascular ultrasound images. A new methodology is also introduced for the accurate extraction of the catheter path. In detail, a cubic B-spline is used for approximating the catheter path in each biplane projection. Each B-spline curve is swept along the normal direction of its X-ray angiographic plane forming a surface. The intersection of the two surfaces is a 3D curve, which represents the reconstructed path. The detected regions of interest in the IVUS images are placed perpendicularly onto the path and their relative axial twist is computed using the sequential triangulation algorithm. Then, an efficient algorithm is applied to estimate the absolute orientation of the first IVUS frame. In order to obtain 3D visualization the commercial package Geomagic Studio 4.0 is used. The performance of the proposed method is assessed using a validation methodology which addresses the separate validation of each step followed for obtaining the coronary reconstruction. The performance of the segmentation algorithm was examined in 80 IVUS images. The reliability of the path extraction method was studied in vitro using a metal wire model and in vivo in a dataset of 11 patients. The performance of the sequential triangulation algorithm was tested in two gutter models and in the coronary arteries (marked with metal clips) of six cadaveric sheep hearts. Finally, the accuracy in the estimation of the first IVUS frame absolute orientation was examined in the same set of cadaveric sheep hearts. The obtained results demonstrate that the proposed reconstruction method is reliable and capable of depicting the morphology of

  11. Image reconstruction for the ClearPET™ Neuro

    NASA Astrophysics Data System (ADS)

    Weber, Simone; Morel, Christian; Simon, Luc; Krieguer, Magalie; Rey, Martin; Gundlich, Brigitte; Khodaverdi, Maryam

    2006-12-01

    ClearPET™ is a family of small-animal PET scanners which are currently under development within the Crystal Clear Collaboration (CERN). All scanners are based on the same detector block design using individual LSO and LuYAP crystals in phoswich configuration, coupled to multi-anode photomultiplier tubes. One of the scanners, the ClearPET™ Neuro, is designed for applications in neuroscience. Four detector blocks with 64 2×2×10 mm LSO and LuYAP crystals, arranged in a line, form a module. Twenty modules are arranged in a ring with a ring diameter of 13.8 cm and an axial size of 11.2 cm. An insensitive region at the border of the detector heads results in gaps between the detectors axially and tangentially. The detectors rotate by 360° in step-and-shoot mode during data acquisition. Every second module is shifted axially to compensate partly for the gaps between the detector blocks in a module. This unconventional scanner geometry requires dedicated image reconstruction procedures. The data acquisition records single events, which are stored with a time mark in a dedicated list mode format. Coincidences are associated offline by software. After sorting the data into 3D sinograms, image reconstruction is performed using the Ordered Subset Maximum A Posteriori One-Step Late (OSMAPOSL) iterative algorithm implemented in the Software for Tomographic Image Reconstruction (STIR) library. Due to the non-conventional scanner design, careful estimation of the sensitivity matrix is needed to obtain artifact-free images from the ClearPET™ Neuro.

  12. Modeling and image reconstruction in spectrally resolved bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Dehghani, Hamid; Pogue, Brian W.; Davis, Scott C.; Patterson, Michael S.

    2007-02-01

    Recent interest in modeling and reconstruction algorithms for Bioluminescence Tomography (BLT) has increased and led to the general consensus that non-spectrally resolved intensity-based BLT results in a non-unique problem. However, the light emitted from, for example, firefly luciferase is widely distributed over the band of wavelengths from 500 nm to 650 nm and above, with the dominant fraction emitted from tissue being above 550 nm. This paper demonstrates the development of an algorithm used for multi-wavelength 3D spectrally resolved BLT image reconstruction in a mouse model. It is shown that, using single-view data, bioluminescence sources up to 15 mm deep can be successfully recovered given correct information about the underlying tissue absorption and scatter.

  13. Automatic speed of sound correction with photoacoustic image reconstruction

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Cao, Meng; Feng, Ting; Yuan, Jie; Cheng, Qian; Liu, XIaojun; Xu, Guan; Wang, Xueding

    2016-03-01

    Sound velocity measurement is of great importance in biomedical applications, especially in acoustic detection and acoustic tomography. By using the correct sound velocity in each medium, rather than one unified propagation speed, we can effectively enhance the resolution of sound-based imaging. Photoacoustic tomography (PAT) is cross-sectional or three-dimensional (3D) imaging of a material based on the photoacoustic effect, and it is a developing, non-invasive imaging method in biomedical research. This contribution proposes a method to concurrently calculate multiple acoustic speeds in different media. First, we obtain the size of the internal structure of the target by B-mode ultrasound imaging. Then we build the photoacoustic (PA) image of the same target with a different acoustic speed in each medium. By repeatedly evaluating the quality of the reconstructed PA image, we dynamically calibrate the acoustic speeds in the different media to build the sharpest PA image. We then take these speeds of sound as the correct acoustic propagation velocities in the corresponding media. Experiments show that our non-invasive method can yield the correct speed of sound with less than 0.3% error, which might benefit future research in biomedical science.
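
    The calibration loop can be illustrated with a plain grid search (our sketch, not the authors' implementation): reconstruct_pa is a hypothetical reconstructor parameterized by the per-medium speeds, and a gradient-energy sharpness score plays the role of the image-quality measure being maximized.

        import numpy as np
        from itertools import product

        def sharpness(img):
            """Simple focus metric: total squared gradient of the image."""
            gy, gx = np.gradient(np.asarray(img, float))
            return float(np.sum(gx ** 2 + gy ** 2))

        def calibrate_speeds(signals, geometry, candidate_speeds, reconstruct_pa):
            """Grid search over per-medium sound speeds maximizing PA image quality.
            candidate_speeds : dict {medium: iterable of trial speeds in m/s}
            reconstruct_pa   : hypothetical callable(signals, geometry, speeds) -> image"""
            names = list(candidate_speeds)
            best_score, best_speeds = -np.inf, None
            for combo in product(*(candidate_speeds[n] for n in names)):
                speeds = dict(zip(names, combo))
                score = sharpness(reconstruct_pa(signals, geometry, speeds))
                if score > best_score:
                    best_score, best_speeds = score, speeds
            return best_speeds, best_score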

  14. General Structure of Regularization Procedures in Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Titterington, D. M.

    1985-03-01

    Regularization procedures are portrayed as compromises between the conflicting aims of fidelity with the observed image and perfect smoothness. The selection of an estimated image involves the choice of a prescription, indicating the manner of smoothing, and of a smoothing parameter, which defines the degree of smoothing. Prescriptions of the minimum-penalized-distance type are considered and are shown to be equivalent to maximum-penalized-smoothness prescriptions. These include, therefore, constrained least-squares and constrained maximum entropy methods. The formal link with Bayesian statistical analysis is pointed out. Two important methods of choosing the degree of smoothing are described, one based on criteria of consistency with the data and one based on minimizing a risk function. The latter includes minimum mean-squared error criteria. Although the maximum entropy method has some practical advantages, there seems no case for it to hold a special place on philosophical grounds, in the context of image reconstruction.
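
    In symbols (our paraphrase of the standard setup, not a formula quoted from the paper), a minimum-penalized-distance prescription selects

        \hat{f}_{\lambda} \;=\; \arg\min_{f}\; D\!\left(g,\, A f\right) \;+\; \lambda\, \Phi(f),

    where $g$ is the observed image, $A$ the imaging operator, $D$ a data-fidelity distance (for example a sum of squares), $\Phi$ a roughness or negentropy penalty, and $\lambda$ the smoothing parameter. The two selection strategies described above then amount to choosing $\lambda$ so that $D(g, A\hat{f}_{\lambda})$ is consistent with the expected noise level, or choosing the $\lambda$ that minimizes an estimated risk such as the mean-squared error.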

  15. Spectral image reconstruction by a tunable LED illumination

    NASA Astrophysics Data System (ADS)

    Lin, Meng-Chieh; Tsai, Chen-Wei; Tien, Chung-Hao

    2013-09-01

    Spectral reflectance estimation of an object via a low-dimensional snapshot requires both image acquisition and a subsequent numerical estimation analysis. In this study, we set up a system incorporating a homemade cluster of LEDs with spectral modulation for scene illumination, and a multi-channel CCD to acquire multichannel images by means of a fully digital process. Principal component analysis (PCA) and pseudo-inverse transformation were used to reconstruct the spectral reflectance within a constrained training set, such as Munsell and Macbeth Color Checker patches. The average spectral reflectance RMS error over 34 patches of a standard color checker was 0.234. The purpose is to investigate the use of the system in conjunction with imaging analysis for industrial or medical inspection with fast and acceptable accuracy; the approach was preliminarily validated.
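
    A compact sketch of the PCA-plus-pseudoinverse estimation step (our illustration, assuming matrices of training reflectances and the corresponding camera responses are available): project the training spectra onto a few principal components, learn a linear map from camera responses to PCA coefficients via the pseudoinverse, and reconstruct test spectra from new responses.

        import numpy as np

        def train_spectral_estimator(R_train, C_train, n_components=8):
            """R_train : (n_samples, n_wavelengths) training reflectance spectra (e.g. Munsell set)
               C_train : (n_samples, n_channels) corresponding multichannel camera responses"""
            mean_r = R_train.mean(axis=0)
            _, _, Vt = np.linalg.svd(R_train - mean_r, full_matrices=False)
            basis = Vt[:n_components]                     # PCA spectral basis
            coeffs = (R_train - mean_r) @ basis.T         # training PCA coefficients
            W = coeffs.T @ np.linalg.pinv(C_train.T)      # pseudoinverse map: responses -> coefficients
            return W, basis, mean_r

        def estimate_reflectance(C, W, basis, mean_r):
            """C : (n_samples, n_channels) camera responses of unknown patches."""
            return (W @ C.T).T @ basis + mean_r

        def rms_error(R_true, R_est):
            """Spectral RMS error, as reported for the 34 color-checker patches."""
            return float(np.sqrt(np.mean((R_true - R_est) ** 2)))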

  16. A generalized Fourier penalty in prior-image-based reconstruction for cross-platform imaging

    NASA Astrophysics Data System (ADS)

    Pourmorteza, A.; Siewerdsen, J. H.; Stayman, J. W.

    2016-03-01

    Sequential CT studies present an excellent opportunity to apply prior-image-based reconstruction (PIBR) methods that leverage high-fidelity prior imaging studies to improve image quality and/or reduce x-ray exposure in subsequent studies. One major obstacle in using PIBR is that the initial and subsequent studies are often performed on different scanners (e.g. diagnostic CT followed by CBCT for interventional guidance); this results in mismatch in attenuation values due to hardware and software differences. While improved artifact correction techniques can potentially mitigate such differences, the correction is often incomplete. Here, we present an alternate strategy where the PIBR itself is used to mitigate these differences. We define a new penalty for the previously introduced PIBR called Reconstruction of Difference (RoD). RoD differs from many other PIBRs in that it reconstructs only changes in the anatomy (vs. reconstructing the current anatomy). Direct regularization of the difference image in RoD provides an opportunity to selectively penalize spatial frequencies of the difference image (e.g. low frequency differences associated with attenuation offsets and shading artifacts) without interfering with the variations in unchanged background image. We leverage this flexibility and introduce a novel regularization strategy using a generalized Fourier penalty within the RoD framework and develop the modified reconstruction algorithm. We evaluate the performance of the new approach in both simulation studies and in physical CBCT test-bench data. We find that generalized Fourier penalty can be highly effective in reducing low-frequency x-ray artifacts through selective suppression of spatial frequencies in the reconstructed difference image.

  17. Statistics-based reconstruction method with high random-error tolerance for integral imaging.

    PubMed

    Zhang, Juan; Zhou, Liqiu; Jiao, Xiaoxue; Zhang, Lei; Song, Lipei; Zhang, Bo; Zheng, Yi; Zhang, Zan; Zhao, Xing

    2015-10-01

    A three-dimensional (3D) digital reconstruction method for integral imaging with high random-error tolerance based on statistics is proposed. By statistically analyzing the points reconstructed by triangulation from all corresponding image points in an elemental images array, 3D reconstruction with high random-error tolerance can be realized. To simulate the impacts of random errors, random offsets with different error levels are added to a different number of elemental images in simulation and optical experiments. The results of simulation and optical experiments showed that the proposed statistics-based reconstruction method has relatively stable and better reconstruction accuracy than the conventional reconstruction method. It can be verified that the proposed method can effectively reduce the impacts of random errors on 3D reconstruction of integral imaging. This method is simple and very helpful to the development of integral imaging technology.
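
    The statistical idea can be sketched with a robust estimator over pairwise triangulations (our simplification; a real implementation would follow the lens-array geometry more closely): triangulate the 3D point from every pair of corresponding elemental-image rays and take the coordinate-wise median, which is insensitive to a minority of randomly perturbed elemental images.

        import numpy as np
        from itertools import combinations

        def triangulate_pair(o1, d1, o2, d2):
            """Midpoint of the common perpendicular between two non-parallel rays
            (each given by an origin o and a direction d)."""
            d1 = d1 / np.linalg.norm(d1); d2 = d2 / np.linalg.norm(d2)
            n = np.cross(d1, d2)
            t1, t2, _ = np.linalg.solve(np.column_stack([d1, -d2, n]), o2 - o1)
            return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

        def robust_reconstruct(origins, directions):
            """origins, directions : (N, 3) rays derived from all corresponding points
            in the elemental-image array; returns a median-based 3D point estimate."""
            pts = [triangulate_pair(origins[i], directions[i], origins[j], directions[j])
                   for i, j in combinations(range(len(origins)), 2)]
            return np.median(np.asarray(pts), axis=0)       # robust to a few bad rays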

  18. Application of adaptive kinetic modelling for bias propagation reduction in direct 4D image reconstruction

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Matthews, J. C.; Reader, A. J.; Angelis, G. I.; Zaidi, H.

    2014-10-01

    Parametric imaging in thoracic and abdominal PET can provide additional parameters more relevant to the pathophysiology of the system under study. However, dynamic data in the body are noisy due to the limiting counting statistics leading to suboptimal kinetic parameter estimates. Direct 4D image reconstruction algorithms can potentially improve kinetic parameter precision and accuracy in dynamic PET body imaging. However, construction of a common kinetic model is not always feasible and in contrast to post-reconstruction kinetic analysis, errors in poorly modelled regions may spatially propagate to regions which are well modelled. To reduce error propagation from erroneous model fits, we implement and evaluate a new approach to direct parameter estimation by incorporating a recently proposed kinetic modelling strategy within a direct 4D image reconstruction framework. The algorithm uses a secondary more general model to allow a less constrained model fit in regions where the kinetic model does not accurately describe the underlying kinetics. A portion of the residuals then is adaptively included back into the image whilst preserving the primary model characteristics in other well modelled regions using a penalty term that trades off the models. Using fully 4D simulations based on dynamic [15O]H2O datasets, we demonstrate reduction in propagation-related bias for all kinetic parameters. Under noisy conditions, reductions in bias due to propagation are obtained at the cost of increased noise, which in turn results in increased bias and variance of the kinetic parameters. This trade-off reflects the challenge of separating the residuals arising from poor kinetic modelling fits from the residuals arising purely from noise. Nonetheless, the overall root mean square error is reduced in most regions and parameters. Using the adaptive 4D image reconstruction improved model fits can be obtained in poorly modelled regions, leading to reduced errors potentially propagating

  19. Medical Image Watermarking Technique for Accurate Tamper Detection in ROI and Exact Recovery of ROI.

    PubMed

    Eswaraiah, R; Sreenivasa Reddy, E

    2014-01-01

    In telemedicine, tampering may be introduced while transferring medical images. Before making any diagnostic decisions, the integrity of the region of interest (ROI) of the received medical image must be verified to avoid misdiagnosis. In this paper, we propose a novel fragile block-based medical image watermarking technique to avoid embedding distortion inside the ROI, verify the integrity of the ROI, accurately detect the tampered blocks inside the ROI, and recover the original ROI with zero loss. In the proposed method, the medical image is segmented into three sets of pixels: ROI pixels, region of noninterest (RONI) pixels, and border pixels. Then, authentication data and information of the ROI are embedded in the border pixels. Recovery data of the ROI are embedded into the RONI. Results of experiments conducted on a number of medical images reveal that the proposed method produces high-quality watermarked medical images, identifies the presence of tampering inside the ROI with 100% accuracy, and recovers the original ROI without any loss.

  20. Medical Image Watermarking Technique for Accurate Tamper Detection in ROI and Exact Recovery of ROI

    PubMed Central

    Eswaraiah, R.; Sreenivasa Reddy, E.

    2014-01-01

    In telemedicine, tampering may be introduced while transferring medical images. Before making any diagnostic decisions, the integrity of the region of interest (ROI) of the received medical image must be verified to avoid misdiagnosis. In this paper, we propose a novel fragile block-based medical image watermarking technique to avoid embedding distortion inside the ROI, verify the integrity of the ROI, accurately detect the tampered blocks inside the ROI, and recover the original ROI with zero loss. In the proposed method, the medical image is segmented into three sets of pixels: ROI pixels, region of noninterest (RONI) pixels, and border pixels. Then, authentication data and information of the ROI are embedded in the border pixels. Recovery data of the ROI are embedded into the RONI. Results of experiments conducted on a number of medical images reveal that the proposed method produces high-quality watermarked medical images, identifies the presence of tampering inside the ROI with 100% accuracy, and recovers the original ROI without any loss. PMID:25328515
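
    The sketch below illustrates the flavor of the border-pixel embedding scheme described in the two records above: per-block digests of the ROI serve as authentication data, and their bits are written into the least significant bits of the border pixels. The hash choice, block size and LSB embedding are assumptions made for illustration and do not reproduce the authors' exact watermark.

    import hashlib
    import numpy as np

    def roi_block_digests(roi, block=8):
        """Per-block authentication digests for the ROI (4 bytes per block)."""
        digests = []
        for r in range(0, roi.shape[0] - block + 1, block):
            for c in range(0, roi.shape[1] - block + 1, block):
                digests.append(hashlib.sha256(roi[r:r + block, c:c + block].tobytes()).digest()[:4])
        return b"".join(digests)

    def embed_bits_in_border(image, payload, border=2):
        """Write payload bits into the LSBs of the border pixels.

        Assumes an 8-bit grayscale image and enough border pixels to hold
        the payload; excess bits are silently dropped in this sketch.
        """
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        out = image.copy()
        mask = np.zeros(image.shape, dtype=bool)
        mask[:border, :] = True
        mask[-border:, :] = True
        mask[:, :border] = True
        mask[:, -border:] = True
        coords = np.argwhere(mask)[:len(bits)]
        for (r, c), b in zip(coords, bits):
            out[r, c] = (out[r, c] & 0xFE) | int(b)
        return out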

  1. LIRA: Low-Count Image Reconstruction and Analysis

    NASA Astrophysics Data System (ADS)

    Stein, Nathan; van Dyk, David; Connors, Alanna; Siemiginowska, Aneta; Kashyap, Vinay

    2009-09-01

    LIRA is a new software package for the R statistical computing language. The package is designed for multi-scale non-parametric image analysis for use in high-energy astrophysics. The code implements an MCMC sampler that simultaneously fits the image and the necessary tuning/smoothing parameters in the model (an advance from `EMC2' of Esch et al. 2004). The model-based approach allows for quantification of the standard error of the fitted image and can be used to assess the statistical significance of features in the image or to evaluate the goodness-of-fit of a proposed model. The method does not rely on Gaussian approximations, instead modeling image counts as Poisson data, making it suitable for images with extremely low counts. LIRA can include a null (or background) model and fit the departure between the observed data and the null model via a wavelet-like multi-scale component. The technique is therefore suited for problems in which some aspect of an observation is well understood (e.g., a point source), but questions remain about observed departures. To quantitatively test for the presence of diffuse structure unaccounted for by a point source null model, first, the observed image is fit with the null model. Second, multiple simulated images, generated as Poisson realizations of the point source model, are fit using the same null model. MCMC samples from the posterior distributions of the parameters of the fitted models can be compared and can be used to calibrate the misfit between the observed data and the null model. Additionally, output from LIRA includes the MCMC draws of the multi-scale component images, so that the departure of the (simulated or observed) data from the point source null model can be examined visually. To demonstrate LIRA, an example of reconstructing Chandra images of high redshift quasars with jets is presented.

  2. Progress towards the development and testing of source reconstruction methods for neutron imaging of ICF implosions

    SciTech Connect

    Loomis, Eric; Grim, Gary; Wilde, Carl; Wilke, Mark; Wilson, Doug; Morgan, George; Tregillis, Ian; Clark, David; Finch, Joshua; Fittinghoff, D; Bower, D

    2010-01-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility (NIF) is an important and difficult task for the detailed understanding of high neutron yield inertial confinement fusion (ICF) implosions. These methods, once developed, must provide accurate images of the hot and cold fuel so that information about the implosion, such as symmetry and areal density, can be extracted. We are currently considering multiple analysis pathways for obtaining this source distribution of neutrons given a measured pinhole image with a scintillator and camera system. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations [E. Loomis et al. IFSA 2009]. We are currently striving to apply the technique to real data by applying a series of realistic effects that will be present in experimental images. These include various sources of noise, misalignment uncertainties at both the source and image planes, as well as scintillator and camera blurring. Some tests on the quality of image reconstructions have also been performed based on point resolution and Legendre mode improvement of recorded images. So far, the method has proven sufficient to overcome most of these experimental effects with continued development.

  3. Cardiac motion correction based on partial angle reconstructed images in x-ray CT

    SciTech Connect

    Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom

    2015-05-15

    Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of

  4. An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.

    PubMed

    Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan

    2015-11-01

    The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic (the fluid flow) regions. Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with the state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, the temporal resolution can be substantially increased, and thus fluid flow experiments with faster dynamics can be performed.
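
    A toy version of the second prior (the piecewise-constant time behaviour of a voxel in the dynamic region) is easy to write down: fit a single-step function to a voxel's attenuation-versus-time curve by exhaustive search over the change point. The function below is a hypothetical illustration of that model, not the reconstruction algorithm itself.

    import numpy as np

    def fit_single_step(curve):
        """Fit a one-step piecewise-constant model to a voxel attenuation curve.

        Returns (change_index, level_before, level_after) minimizing the
        squared error; this models an advancing fluid/air boundary that
        changes a voxel's attenuation once during the experiment.
        """
        curve = np.asarray(curve, dtype=float)
        t = len(curve)
        best = (0, curve.mean(), curve.mean(), np.inf)
        for k in range(1, t):
            a, b = curve[:k].mean(), curve[k:].mean()
            sse = np.sum((curve[:k] - a) ** 2) + np.sum((curve[k:] - b) ** 2)
            if sse < best[3]:
                best = (k, a, b, sse)
        return best[:3]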

  5. Adaptive reconstruction of millimeter-wave radiometric images.

    PubMed

    Sarkis, Michel

    2012-09-01

    We present a robust method to reconstruct a millimeter-wave image from a passive sensor. The method operates directly on the raw samples from the radiometer. It allocates, for each pixel to be estimated, a patch in the space formed by all the raw samples of the image. It then estimates the noise in the patch by measuring distances that reflect how far the samples are from forming a piecewise smooth surface. It then allocates a weight for each sample that defines its contribution to the pixel reconstruction. This is done via a smoothing kernel that enforces the distances to have a piecewise smooth variation inside the patch. Results on real datasets show that our scheme leads to more contrast and less noise, and that the shape of an object is better preserved in the reconstructed image compared to state-of-the-art schemes. The proposed scheme produces better results even with low integration times, i.e., 10% of the total integration time used in our experiments.

  6. Absolute phase image reconstruction: a stochastic nonlinear filtering approach.

    PubMed

    Leitão, J N; Figueiredo, M A

    1998-01-01

    This paper formulates and proposes solutions to the problem of estimating/reconstructing the absolute (not simply modulo-2pi) phase of a complex random field from noisy observations of its real and imaginary parts. This problem is representative of a class of important imaging techniques such as interferometric synthetic aperture radar, optical interferometry, magnetic resonance imaging, and diffraction tomography. We follow a Bayesian approach; then, not only a probabilistic model of the observation mechanism, but also prior knowledge concerning the (phase) image to be reconstructed, are needed. We take as prior a nonsymmetrical half plane autoregressive (NSHP AR) Gauss-Markov random field (GMRF). Based on a reduced order state-space formulation of the (linear) NSHP AR model and on the (nonlinear) observation mechanism, a recursive stochastic nonlinear filter is derived. The corresponding estimates are compared with those obtained by the extended Kalman-Bucy filter, a classical linearizing approach to the same problem. A set of examples illustrates the effectiveness of the proposed approach. PMID:18276299

  7. Local Surface Reconstruction from MER images using Stereo Workstation

    NASA Astrophysics Data System (ADS)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

    The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) Mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, which is then followed by tiepoint refinement, stereo matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in JAVA, and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often required to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion to assess the quality of reconstruction is the density (or completeness) of reconstruction, which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL

  8. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single-energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate of the attenuation coefficient function of K2HPO4 solution.
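
    The core of a polychromatic forward model can be illustrated with a short line-integral computation for a two-material mixture; ignoring the energy dependence of the attenuation coefficients is precisely what produces beam-hardening artifacts. The function and its arguments are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def polychromatic_projection(spectrum, mu1, mu2, l1, l2):
        """Polychromatic line-integral measurement for a two-material mixture.

        spectrum : source spectrum S(E) sampled on a common energy grid
        mu1, mu2 : attenuation coefficients of the two known substances on
                   the same grid (e.g. trabecular bone and marrow)
        l1, l2   : path lengths through each substance along the ray
        """
        spectrum = np.asarray(spectrum, dtype=float)
        attenuated = np.exp(-(np.asarray(mu1) * l1 + np.asarray(mu2) * l2))
        transmitted = np.sum(spectrum * attenuated)
        # A monoenergetic reconstruction ignores this energy weighting,
        # which is the source of beam-hardening artifacts.
        return -np.log(transmitted / spectrum.sum())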

  9. Cardiac-state-driven CT image reconstruction algorithm for cardiac imaging

    NASA Astrophysics Data System (ADS)

    Cesmeli, Erdogan; Edic, Peter M.; Iatrou, Maria; Hsieh, Jiang; Gupta, Rajiv; Pfoh, Armin H.

    2002-05-01

    Multi-slice CT scanners use EKG gating to predict the cardiac phase during slice reconstruction from projection data. Cardiac phase is generally defined with respect to the RR interval. The implicit assumption made is that the duration of events in an RR interval scales linearly when the heart rate changes. Using a more detailed EKG analysis, we evaluate the impact of relaxing this assumption on image quality. We developed a reconstruction algorithm that analyzes the associated EKG waveform to extract the natural cardiac states. A wavelet transform was used to decompose each RR interval into P, QRS, and T waves. Subsequently, cardiac phase was defined with respect to these waves instead of a percentage or time delay from the beginning or the end of the RR interval. The projection data were then tagged with the cardiac phase and processed using temporal weights that are functions of their cardiac phases. Finally, the tagged projection data were combined from multiple cardiac cycles using a multi-sector algorithm to reconstruct images. The new algorithm was applied to clinical data, collected on a 4-slice (GE LightSpeed Qx/i) and an 8-slice CT scanner (GE LightSpeed Plus), with heart rates of 40 to 80 bpm. The quality of reconstruction is assessed by the visualization of the major arteries, e.g. RCA, LAD, LC, in the reformatted 3D images. Preliminary results indicate that the cardiac-state-driven reconstruction algorithm offers better image quality than its RR-based counterparts.
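
    A rough sketch of the wavelet step, assuming PyWavelets and a Daubechies wavelet (the record does not specify either), is shown below: each RR interval is decomposed into per-scale components, with the sharp QRS complex dominating the fine scales and the slower P and T waves the coarser ones. Thresholding the per-scale reconstructions is one plausible way to locate the wave boundaries.

    import numpy as np
    import pywt

    def decompose_rr_interval(ecg_segment, wavelet="db4", level=5):
        """Multiresolution decomposition of one RR interval.

        Returns one reconstructed signal per detail scale so that the
        scales carrying the QRS complex and the P/T waves can be inspected
        separately.
        """
        ecg_segment = np.asarray(ecg_segment, dtype=float)
        coeffs = pywt.wavedec(ecg_segment, wavelet, level=level)
        scales = []
        for i in range(1, len(coeffs)):
            keep = [np.zeros_like(c) for c in coeffs]
            keep[i] = coeffs[i]
            scales.append(pywt.waverec(keep, wavelet)[:len(ecg_segment)])
        return scales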

  10. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate a photo-realistic 3D watertight surface of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used in reconstructing the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of a 3D surface from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on Samples per node (SN), greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the ball-pivoting algorithm is found to depend strongly upon the Clustering radius and Angle threshold values. The results obtained from this study give the readers of the article a valuable insight into the effects of different control parameters on the reconstructed surface quality. PMID:27386376

  11. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate a photo-realistic 3D watertight surface of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used in reconstructing the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of a 3D surface from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on Samples per node (SN), greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the ball-pivoting algorithm is found to depend strongly upon the Clustering radius and Angle threshold values. The results obtained from this study give the readers of the article a valuable insight into the effects of different control parameters on the reconstructed surface quality.
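
    For readers who want to reproduce the comparison, both surface reconstruction algorithms are available in open-source libraries such as Open3D. The sketch below is a minimal example under that assumption; the file name, octree depth and ball radii are illustrative, and the exposed parameters are analogous to, though not identical with, the Samples-per-node and clustering-radius settings discussed above.

    import open3d as o3d

    # Hypothetical dense point cloud obtained from the image sequence.
    pcd = o3d.io.read_point_cloud("object_points.ply")
    pcd.estimate_normals()

    # Poisson reconstruction: 'depth' plays a role comparable to the
    # octree/samples-per-node settings; higher values give finer surfaces
    # at higher computational cost.
    mesh_poisson, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

    # Ball pivoting: the list of ball radii is the counterpart of the
    # clustering-radius setting and should be tuned to the point spacing.
    radii = o3d.utility.DoubleVector([0.005, 0.01, 0.02])
    mesh_bpa = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)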

  12. Multi-material decomposition using statistical image reconstruction for spectral CT.

    PubMed

    Long, Yong; Fessler, Jeffrey A

    2014-08-01

    Spectral computed tomography (CT) provides information on material characterization and quantification because of its ability to separate different basis materials. Dual-energy (DE) CT provides two sets of measurements at two different source energies. In principle, two materials can be accurately decomposed from DECT measurements. However, many clinical and industrial applications require three or more material images. For triple-material decomposition, a third constraint, such as volume conservation, mass conservation or both, is required to solve three sets of unknowns from two sets of measurements. The recently proposed flexible image-domain (ID) multi-material decomposition method assumes each pixel contains at most three materials out of several possible materials and decomposes a mixture pixel by pixel. We propose a penalized-likelihood (PL) method with edge-preserving regularizers for each material to reconstruct multi-material images using a similar constraint from sinogram data. We develop an optimization transfer method with a series of pixel-wise separable quadratic surrogate (PWSQS) functions to monotonically decrease the complicated PL cost function. The PWSQS algorithm separates pixels to allow simultaneous update of all pixels, but keeps the basis materials coupled to allow a faster convergence rate than our previously proposed material- and pixel-wise SQS algorithms. Compared with the ID method using 2-D fan-beam simulations, the PL method greatly reduced noise, streak and cross-talk artifacts in the reconstructed basis component images, and achieved much smaller root mean square errors.

  13. Generation of Rayleigh-wave dispersion images from multichannel seismic data using sparse signal reconstruction

    NASA Astrophysics Data System (ADS)

    Mun, Songchol; Bao, Yuequan; Li, Hui

    2015-11-01

    The accurate estimation of dispersion curves has been a key issue for ensuring high quality in geophysical surface wave exploration. Many studies have been carried out on the generation of a high-resolution dispersion image from array measurements. In this study, the sparse signal representation and reconstruction techniques are employed to obtain the high resolution Rayleigh-wave dispersion image from seismic wave data. First, a sparse representation of the seismic wave data is introduced, in which the signal is assumed to be sparse in terms of wave speed. Then, the sparse signal is reconstructed by optimization using l1-norm regularization, which gives the signal amplitude spectrum as a function of wave speed. A dispersion image in the f-v domain is generated by arranging the sparse spectra for all frequency slices in the frequency range. Finally, to show the efficiency of the proposed approach, the Surfbar-2 field test data, acquired by B. Luke and colleagues at the University of Nevada Las Vegas, are analysed. By comparing the real-field dispersion image with the results from other methods, the high mode-resolving ability of the proposed approach is demonstrated, particularly for a case with strongly coherent modes.
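
    The per-frequency sparse recovery can be illustrated with a small ISTA (iterative shrinkage-thresholding) solver for the l1-regularized least-squares problem; the steering-matrix construction, regularization weight and iteration count below are assumptions for illustration rather than the authors' implementation.

    import numpy as np

    def sparse_dispersion_slice(d, offsets, freq, velocities, lam=0.1, n_iter=200):
        """Recover a sparse amplitude spectrum over trial wave speeds for one
        frequency slice using ISTA.

        d         : complex trace spectra at this frequency (n_traces,)
        offsets   : source-receiver offsets in metres (n_traces,)
        velocities: trial phase velocities in m/s (n_v,)
        """
        omega = 2 * np.pi * freq
        # Steering matrix: plane-wave phase delays for each trial velocity.
        A = np.exp(-1j * omega * np.outer(offsets, 1.0 / velocities))
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(len(velocities), dtype=complex)
        for _ in range(n_iter):
            grad = A.conj().T @ (A @ x - d)
            z = x - grad / L
            # Soft thresholding promotes a spectrum that is sparse in velocity.
            x = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - lam / L, 0.0)
        return np.abs(x)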

  14. Reconstructing the open-field magnetic geometry of solar corona using coronagraph images

    NASA Astrophysics Data System (ADS)

    Uritsky, Vadim M.; Davila, Joseph M.; Jones, Shaela; Burkepile, Joan

    2015-04-01

    The upcoming Solar Probe Plus and Solar Orbiter missions will provide new insight into the inner heliosphere magnetically connected with the topologically complex and eruptive solar corona. Physical interpretation of these observations will depend on accurate reconstruction of the large-scale coronal magnetic field. We argue that such reconstruction can be performed using photospheric extrapolation codes constrained by white-light coronagraph images. The field extrapolation component of this project is featured in a related presentation by S. Jones et al. Here, we focus on our image-processing algorithms conducting an automated segmentation of coronal loop structures. In contrast to previously proposed segmentation codes designed for detecting small-scale closed loops in the vicinity of active regions, our technique focuses on the large-scale geometry of the open-field coronal features observed at significant radial distances from the solar surface. Coronagraph images are transformed into a polar coordinate system and undergo radial detrending and initial noise reduction followed by an adaptive angular differentiation. An adjustable threshold is applied to identify candidate coronagraph features associated with the large-scale coronal field. A blob detection algorithm is used to identify valid features against a noisy background. The extracted coronal features are used to derive empirical directional constraints for magnetic field extrapolation procedures based on photospheric magnetograms. Two versions of the method, optimized for processing ground-based (Mauna Loa Solar Observatory) and satellite-based (STEREO Cor1 and Cor2) coronagraph images, are being developed.
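
    A compressed sketch of the segmentation chain, assuming scikit-image and substituting a simple median-based radial detrend and a global threshold for the adaptive steps described above, might look as follows; the function name, threshold and minimum blob area are hypothetical.

    import numpy as np
    from skimage.transform import warp_polar
    from skimage.measure import label, regionprops

    def extract_radial_features(coronagraph, threshold=3.0, min_area=50):
        """Polar resampling, radial detrending, thresholding and blob detection."""
        polar = warp_polar(coronagraph)                 # rows: angle, cols: radius
        detrended = polar - np.median(polar, axis=0)    # remove radial brightness trend
        mask = detrended > threshold * detrended.std()  # crude stand-in for the adaptive threshold
        blobs = label(mask)
        # Keep only extended blobs as candidate large-scale coronal features.
        return [r for r in regionprops(blobs) if r.area >= min_area]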

  15. Novel designed magnetic leakage testing sensor with GMR for image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Sasamoto, Akira; Suzuki, Takayuki

    2012-04-01

    The authors developed, a few years ago, an image reconstruction algorithm that can accurately reconstruct images of flaws from data obtained using conventional ECT sensors. The reconstruction algorithm is designed for data assumed to be acquired with a spatially uniform magnetic field on the target surface. The conventional ECT sensor the authors used, however, is designed so that the strength of the magnetic field imposed on the target surface is maximized. This violation of the assumption ruins the simplicity of the algorithm, because complementary response functions, called "LSF", must be employed for long line flaws, which is not part of the original algorithm design. In order to obtain experimental results proving the validity of the original algorithm with only one response function, the authors developed last year a prototype sensor for magnetic flux leakage testing that satisfies the requirements of the original algorithm. The sensor comprises a GMR magnetic field sensor to detect a static magnetic field and two magnets adjacent to the GMR sensor to magnetize the target specimen. However, the data obtained had insufficient accuracy due to the weakness of the magnet; the sensor was therefore redesigned this year with a much stronger magnet. Data obtained with the new sensor show that the algorithm is most likely to work well with only one response function for this type of probe.

  16. Algorithms for image reconstruction from projections in optical tomography

    NASA Astrophysics Data System (ADS)

    Zhu, Lin-Sheng; Huang, Su-Yi

    1993-09-01

    It is well known that determination of the temperature field by holographic interferometry is a successful method in thermophysics measurement. In this paper, some practical algorithms for image reconstruction from projections are presented to produce the temperature field. The algorithms developed consist of directly solving the Radon transform integral equation by a grid method, and numerically evaluating the Radon inversion formula by a two-dimensional Fourier transform technique. Some examples are given to verify the validity of the above methods in practice.

  17. Image reconstruction by the speckle-masking method.

    PubMed

    Weigelt, G; Wirnitzer, B

    1983-07-01

    Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arc sec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography.

  18. Image reconstruction by the speckle-masking method.

    PubMed

    Weigelt, G; Wirnitzer, B

    1983-07-01

    Speckle masking is a method for reconstructing high-resolution images of general astronomical objects from stellar speckle interferograms. In speckle masking no unresolvable star is required within the isoplanatic patch of the object. We present digital applications of speckle masking to close spectroscopic double stars. The speckle interferograms were recorded with the European Southern Observatory's 3.6-m telescope. Diffraction-limited resolution (0.03 arc sec) was achieved, which is about 30 times higher than the resolution of conventional astrophotography. PMID:19718124

  19. Fast Multigrid Techniques in Total Variation-Based Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Oman, Mary Ellen

    1996-01-01

    Existing multigrid techniques are used to effect an efficient method for reconstructing an image from noisy, blurred data. Total Variation minimization yields a nonlinear integro-differential equation which, when discretized using cell-centered finite differences, yields a full matrix equation. A fixed point iteration is applied with the intermediate matrix equations solved via a preconditioned conjugate gradient method which utilizes multi-level quadrature (due to Brandt and Lubrecht) to apply the integral operator and a multigrid scheme (due to Ewing and Shen) to invert the differential operator. With effective preconditioning, the method presented appears to require O(n) operations. Numerical results are given for a two-dimensional example.

  20. Noise and signal properties in PSF-based fully 3D PET image reconstruction: an experimental evaluation

    NASA Astrophysics Data System (ADS)

    Tong, S.; Alessio, A. M.; Kinahan, P. E.

    2010-03-01

    The addition of accurate system modeling in PET image reconstruction results in images with distinct noise texture and characteristics. In particular, the incorporation of point spread functions (PSF) into the system model has been shown to visually reduce image noise, but the noise properties have not been thoroughly studied. This work offers a systematic evaluation of noise and signal properties in different combinations of reconstruction methods and parameters. We evaluate two fully 3D PET reconstruction algorithms: (1) OSEM with exact scanner line of response modeled (OSEM+LOR), (2) OSEM with line of response and a measured point spread function incorporated (OSEM+LOR+PSF), in combination with the effects of four post-reconstruction filtering parameters and 1-10 iterations, representing a range of clinically acceptable settings. We used a modified NEMA image quality (IQ) phantom, which was filled with 68Ge and consisted of six hot spheres of different sizes with a target/background ratio of 4:1. The phantom was scanned 50 times in 3D mode on a clinical system to provide independent noise realizations. Data were reconstructed with OSEM+LOR and OSEM+LOR+PSF using different reconstruction parameters, and our implementations of the algorithms match the vendor's product algorithms. With access to multiple realizations, background noise characteristics were quantified with four metrics. Image roughness and the standard deviation image measured the pixel-to-pixel variation; background variability and ensemble noise quantified the region-to-region variation. Image roughness is the image noise perceived when viewing an individual image. At matched iterations, the addition of PSF leads to images with less noise defined as image roughness (reduced by 35% for unfiltered data) and as the standard deviation image, while it has no effect on background variability or ensemble noise. In terms of signal to noise performance, PSF-based reconstruction has a 7% improvement in

  1. Detection and 3D reconstruction of traffic signs from multiple view color images

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno

    2013-03-01

    3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. In order to reflect reality, the reconstruction process should meet both accuracy and precision requirements. To reach such a valid reconstruction from calibrated multi-view images, accurate and precise extraction of the signs in every individual view is a must. This paper first presents an automatic pipeline for identifying and extracting the silhouettes of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimum 3D silhouette for the detected signs. The first step, called detection, uses color-based segmentation to generate ROIs (regions of interest) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to edge points. A ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove the perspective distortion and then matched with a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouettes in the image plane are represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach taking into account epipolar geometry and the similarity of the categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm integrating a priori knowledge about the 3D shape of road signs as constraints. The algorithm is assessed on real and synthetic images and reached an average accuracy of 3.5 cm for

  2. Transaxial system models for jPET-D4 image reconstruction.

    PubMed

    Yamaya, Taiga; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki; Kitamura, Keishi; Hasegawa, Tomoyuki; Haneishi, Hideaki; Yoshida, Eiji; Inadama, Naoko; Murayama, Hideo

    2005-11-21

    A high-performance brain PET scanner, jPET-D4, which provides four-layer depth-of-interaction (DOI) information, is being developed to achieve not only high spatial resolution, but also high scanner sensitivity. One technical issue to be dealt with is the data dimensions, which increase in proportion to the square of the number of DOI layers. It is, therefore, difficult to apply algebraic or statistical image reconstruction methods directly to DOI-PET, though they improve image quality through accurate system modelling. The process that requires the most computational time and storage space is the calculation of the huge number of system matrix elements. The DOI compression (DOIC) method, which we have previously proposed, reduces the data dimensions by a factor of 1/5. In this paper, we propose a transaxial imaging system model optimized for jPET-D4 with the DOIC method. The proposed model assumes that detector response functions (DRFs) are uniform along lines of response (LORs). Each element of the system matrix is then calculated as the summed intersection lengths between a pixel and sub-LORs, weighted by values from the DRF look-up table. 2D numerical simulation results showed that the proposed model cut the calculation time by a factor of several hundred while keeping image quality, compared with the accurate system model. A 3D image reconstruction with on-the-fly calculation of the system matrix is within practical limits when the proposed model and the DOIC method are combined with one-pass accelerated iterative methods.

  3. Transaxial system models for jPET-D4 image reconstruction

    NASA Astrophysics Data System (ADS)

    Yamaya, Taiga; Hagiwara, Naoki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki; Kitamura, Keishi; Hasegawa, Tomoyuki; Haneishi, Hideaki; Yoshida, Eiji; Inadama, Naoko; Murayama, Hideo

    2005-11-01

    A high-performance brain PET scanner, jPET-D4, which provides four-layer depth-of-interaction (DOI) information, is being developed to achieve not only high spatial resolution, but also high scanner sensitivity. One technical issue to be dealt with is the data dimensions, which increase in proportion to the square of the number of DOI layers. It is, therefore, difficult to apply algebraic or statistical image reconstruction methods directly to DOI-PET, though they improve image quality through accurate system modelling. The process that requires the most computational time and storage space is the calculation of the huge number of system matrix elements. The DOI compression (DOIC) method, which we have previously proposed, reduces the data dimensions by a factor of 1/5. In this paper, we propose a transaxial imaging system model optimized for jPET-D4 with the DOIC method. The proposed model assumes that detector response functions (DRFs) are uniform along lines of response (LORs). Each element of the system matrix is then calculated as the summed intersection lengths between a pixel and sub-LORs, weighted by values from the DRF look-up table. 2D numerical simulation results showed that the proposed model cut the calculation time by a factor of several hundred while keeping image quality, compared with the accurate system model. A 3D image reconstruction with on-the-fly calculation of the system matrix is within practical limits when the proposed model and the DOIC method are combined with one-pass accelerated iterative methods.

  4. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  5. Accurate determination of imaging modality using an ensemble of text- and image-based classifiers.

    PubMed

    Kahn, Charles E; Kalpathy-Cramer, Jayashree; Lam, Cesar A; Eldredge, Christina E

    2012-02-01

    Imaging modality can aid retrieval of medical images for clinical practice, research, and education. We evaluated whether an ensemble classifier could outperform its constituent individual classifiers in determining the modality of figures from radiology journals. Seventeen automated classifiers analyzed 77,495 images from two radiology journals. Each classifier assigned one of eight imaging modalities--computed tomography, graphic, magnetic resonance imaging, nuclear medicine, positron emission tomography, photograph, ultrasound, or radiograph--to each image based on visual and/or textual information. Three physicians determined the modality of 5,000 randomly selected images as a reference standard. A "Simple Vote" ensemble classifier assigned each image to the modality that received the greatest number of individual classifiers' votes. A "Weighted Vote" classifier weighted each individual classifier's vote based on performance over a training set. For each image, this classifier's output was the imaging modality that received the greatest weighted vote score. We measured precision, recall, and F score (the harmonic mean of precision and recall) for each classifier. Individual classifiers' F scores ranged from 0.184 to 0.892. The simple vote and weighted vote classifiers correctly assigned 4,565 images (F score, 0.913; 95% confidence interval, 0.905-0.921) and 4,672 images (F score, 0.934; 95% confidence interval, 0.927-0.941), respectively. The weighted vote classifier performed significantly better than all individual classifiers. An ensemble classifier correctly determined the imaging modality of 93% of figures in our sample. The imaging modality of figures published in radiology journals can be determined with high accuracy, which will improve systems for image retrieval.
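
    The weighted-vote combination itself is simple to express; in the sketch below the classifier names and weights are hypothetical, and setting all weights equal reproduces the simple-vote ensemble.

    from collections import defaultdict

    def weighted_vote(predictions, weights):
        """Combine modality predictions from several classifiers.

        predictions: dict mapping classifier name -> predicted modality label
        weights    : dict mapping classifier name -> weight learned on a
                     training set (e.g. its F score)
        """
        scores = defaultdict(float)
        for name, modality in predictions.items():
            scores[modality] += weights.get(name, 1.0)
        return max(scores, key=scores.get)

    # Example: three hypothetical classifiers voting on one figure.
    print(weighted_vote({"text": "CT", "visual": "MRI", "hybrid": "CT"},
                        {"text": 0.7, "visual": 0.9, "hybrid": 0.6}))   # -> "CT"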

  6. Quasi Monte Carlo-based Isotropic Distribution of Gradient Directions for Improved Reconstruction Quality of 3D EPR Imaging

    PubMed Central

    Ahmad, Rizwan; Deng, Yuanmu; Vikram, Deepti S.; Clymer, Bradley; Srinivasan, Parthasarathy; Zweier, Jay L.; Kuppusamy, Periannan

    2007-01-01

    In continuous wave (CW) electron paramagnetic resonance imaging (EPRI), high reconstructed image quality along with fast and reliable data acquisition is highly desirable for many biological applications. An accurate representation of a uniform distribution of projection data is necessary to ensure high reconstruction quality. The current techniques for data acquisition suffer from nonuniformities or local anisotropies in the distribution of projection data and present a poor approximation of a true uniform and isotropic distribution. In this work, we have implemented a technique based on the quasi-Monte Carlo method to acquire projections with a more uniform and isotropic distribution of data over a 3D acquisition space. The proposed technique exhibits improvements in reconstruction quality in terms of both mean-square error and visual judgment. The effectiveness of the suggested technique is demonstrated using computer simulations and 3D EPRI experiments. The technique is robust and exhibits consistent performance for different object configurations and orientations. PMID:17095271
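
    One common way to build a low-discrepancy set of directions on the sphere is to map a 2D Halton sequence through the equal-area transform, as in the sketch below; the paper's specific quasi-Monte Carlo construction may differ, so this is only an illustration of the idea.

    import numpy as np

    def van_der_corput(n, base):
        """Radical-inverse (van der Corput) sequence in the given base."""
        seq = np.zeros(n)
        for i in range(n):
            f, denom, k = 0.0, 1.0, i + 1
            while k > 0:
                denom *= base
                k, r = divmod(k, base)
                f += r / denom
            seq[i] = f
        return seq

    def halton_sphere_directions(n):
        """Quasi-uniform gradient directions over the sphere from a 2D Halton set."""
        u, v = van_der_corput(n, 2), van_der_corput(n, 3)
        z = 2.0 * u - 1.0                  # uniform in cos(polar angle)
        phi = 2.0 * np.pi * v              # uniform azimuth
        s = np.sqrt(1.0 - z ** 2)
        return np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=1)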

  7. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image reconstruction, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  8. Human tooth and root canal morphology reconstruction using magnetic resonance imaging

    PubMed Central

    DRĂGAN, OANA CARMEN; FĂRCĂŞANU, ALEXANDRU ŞTEFAN; CÂMPIAN, RADU SEPTIMIU; TURCU, ROMULUS VALERIU FLAVIU

    2016-01-01

    Background and aims. Visualization of the internal and external root canal morphology is very important for a successful endodontic treatment; however, it seems to be difficult considering the small size of the tooth and the complexity of the root canal system. Film-based or digital conventional radiographic techniques as well as cone beam computed tomography provide limited information on the dental pulp anatomy or have harmful effects. A new non-invasive diagnostic tool is magnetic resonance imaging, due to its ability to image both hard and soft tissues. The aim of this study was to demonstrate that magnetic resonance imaging is a useful tool for imaging the anatomic conditions of the external and internal root canal morphology for endodontic purposes. Methods. The endodontic system of one freshly extracted wisdom tooth, chosen for its well-known anatomical variations, was mechanically shaped using a hybrid technique. After its preparation, the tooth was immersed in a container of saline solution and imaged by magnetic resonance immediately. A Bruker Biospec magnetic resonance imaging scanner operated at 7.04 Tesla and based on Avance III radio frequency technology was used. InVesalius software was employed for the 3D reconstruction of the scanned tooth volume. Results. The current ex-vivo experiment shows the accurate 3D volume-rendered reconstruction of the internal and external morphology of a human extracted and endodontically treated tooth using a dataset of images acquired by magnetic resonance imaging. The external lingual and vestibular views of the tooth as well as the occlusal view of the pulp chamber, the access cavity, the distal canal opening on the pulp chamber floor, the coronal third of the root canals, the degree of root separation and the apical fusion of the two mesial roots, details of the apical region, root canal curvatures, the furcal region and interradicular root grooves could be clearly delineated. Conclusions. Magnetic resonance imaging offers 3

  9. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    NASA Astrophysics Data System (ADS)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels: a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.

  10. Image reconstruction from limited angle projections collected by multisource interior x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Liu, Baodong; Wang, Ge; Ritman, Erik L.; Cao, Guohua; Lu, Jianping; Zhou, Otto; Zeng, Li; Yu, Hengyong

    2011-10-01

    A multisource x-ray interior imaging system with limited angle scanning is investigated to study the possibility of building an ultrafast micro-CT for dynamic small animal imaging, and two methods are employed to perform interior reconstruction from a limited number of projections collected by the multisource interior x-ray system. The first is total variation minimization with the steepest descent search (TVM-SD) and the second is total difference minimization with soft-threshold filtering (TDM-STF). Comprehensive numerical simulations and animal studies are performed to validate the associated reconstruction methods and demonstrate the feasibility and application of the proposed system configuration. The image reconstruction results show that both of the two reconstruction methods can significantly improve the image quality and the TDM-STF is slightly superior to the TVM-SD. Finally, quantitative image analysis shows that it is possible to make an ultrafast micro-CT using a multisource interior x-ray system scheme combined with state-of-the-art interior tomography.

  11. Image reconstruction from limited angle projections collected by multisource interior x-ray imaging systems.

    PubMed

    Liu, Baodong; Wang, Ge; Ritman, Erik L; Cao, Guohua; Lu, Jianping; Zhou, Otto; Zeng, Li; Yu, Hengyong

    2011-10-01

    A multisource x-ray interior imaging system with limited angle scanning is investigated to study the possibility of building an ultrafast micro-CT for dynamic small animal imaging, and two methods are employed to perform interior reconstruction from a limited number of projections collected by the multisource interior x-ray system. The first is total variation minimization with the steepest descent search (TVM-SD) and the second is total difference minimization with soft-threshold filtering (TDM-STF). Comprehensive numerical simulations and animal studies are performed to validate the associated reconstruction methods and demonstrate the feasibility and application of the proposed system configuration. The image reconstruction results show that both of the two reconstruction methods can significantly improve the image quality and the TDM-STF is slightly superior to the TVM-SD. Finally, quantitative image analysis shows that it is possible to make an ultrafast micro-CT using a multisource interior x-ray system scheme combined with state-of-the-art interior tomography.

  12. Bayesian Super-Resolved Surface Reconstruction From Multiple Images

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Cheesman, P.; Maluf, D. A.; Morris, R. D.; Swanson, Keith (Technical Monitor)

    1999-01-01

    Bayesian inference has been used successfully for many problems where the aim is to infer the parameters of a model of interest. In this paper we formulate the three-dimensional reconstruction problem as the problem of inferring the parameters of a surface model from image data, and show how Bayesian methods can be used to estimate the parameters of this model given the image data. Thus we recover the three-dimensional description of the scene. This approach also gives great flexibility. We can specify the geometrical properties of the model to suit our purpose, and can also use different models for how the surface reflects the light incident upon it. In common with other Bayesian inference problems, the estimation methodology requires that we can simulate the data that would have been recorded for any values of the model parameters. In this application this means that if we have image data we must be able to render the surface model. However it also means that we can infer the parameters of a model whose resolution can be chosen irrespective of the resolution of the images, and may be super-resolved. We present results of the inference of surface models from simulated aerial photographs for the case of super-resolution, where many surface elements project into a single pixel in the low-resolution images.

  13. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always assumed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices image quality for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, whose features combine those of both classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in the simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  14. A High Precision Terahertz Wave Image Reconstruction Algorithm.

    PubMed

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always assumed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices image quality for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, whose features combine those of both classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in the simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  15. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  16. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  17. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework. PMID:23846472

  18. 3D reconstruction of concave surfaces using polarisation imaging

    NASA Astrophysics Data System (ADS)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.

  19. Reconstruction of mechanically recorded sound by image processing

    SciTech Connect

    Fadeyev, Vitaliy; Haber, Carl

    2003-03-26

    Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with no or minimal contact, by measuring the groove shape using precision metrology methods and digital image processing. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Various aspects of this approach are discussed. A feasibility test is reported which used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record. Comparisons are presented with stylus playback of the record and with a digitally re-mastered version of the original magnetic recording. A more extensive implementation of this approach, with dedicated hardware and software, is considered.

  20. ISIS: Image reconstruction experiments and comparison of various array configurations

    NASA Astrophysics Data System (ADS)

    Reinheimer, T.; Hofmann, K.-H.; Weigelt, G.

    1987-08-01

    The application of speckle masking (triple correlation processing) to coherent telescope arrays in space is introduced. True diffraction-limited images are obtained since speckle masking is the solution of the phase problem in speckle interferometry. For example, a 14 m array can yield a resolution of 0.004 arcsec at 200 nm wavelength. A resolution of 0.000001 arcsec can be obtained with a 40 km array at 200 nm. Computer simulations of optical aperture synthesis by speckle masking are shown. Simulations of a two-dimensional ring-shaped array and of a linear one-dimensional array are described. The dependence of the signal-to-noise ratio in the reconstructed image on photon noise is discussed.

  1. Accurate modeling and reconstruction of three-dimensional percolating filamentary microstructures from two-dimensional micrographs via dilation-erosion method

    SciTech Connect

    Guo, En-Yu; Chawla, Nikhilesh; Jing, Tao; Torquato, Salvatore; Jiao, Yang

    2014-03-01

    Heterogeneous materials are ubiquitous in nature and synthetic situations and have a wide range of important engineering applications. Accurately modeling and reconstructing the three-dimensional (3D) microstructure of topologically complex materials from limited morphological information, such as a two-dimensional (2D) micrograph, is crucial to the assessment and prediction of effective material properties and performance under extreme conditions. Here, we extend a recently developed dilation–erosion method and employ the Yeong–Torquato stochastic reconstruction procedure to model and generate 3D austenitic–ferritic cast duplex stainless steel microstructure containing percolating filamentary ferrite phase from 2D optical micrographs of the material sample. Specifically, the ferrite phase is dilated to produce a modified target 2D microstructure and the resulting 3D reconstruction is eroded to recover the percolating ferrite filaments. The dilation–erosion reconstruction is compared with the actual 3D microstructure, obtained from serial sectioning (polishing), as well as the standard stochastic reconstructions incorporating topological connectedness information. The fact that the former can achieve the same level of accuracy as the latter suggests that the dilation–erosion procedure is tantamount to incorporating appreciably more topological and geometrical information into the reconstruction while being much more computationally efficient. - Highlights: • Spatial correlation functions used to characterize filamentary ferrite phase • Clustering information assessed from 3D experimental structure via serial sectioning • Stochastic reconstruction used to generate 3D virtual structure from 2D micrograph • Dilation–erosion method to improve accuracy of 3D reconstruction.

  2. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    SciTech Connect

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-04-15

    Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground-truth and on clinical data sets of 16 patients using manual marker annotations as ground-truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images

  3. High-quality image reconstruction method for ptychography with partially coherent illumination

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wang, Shouyu; Veetil, Suhas; Gao, Shumei; Liu, Cheng; Zhu, Jianqiang

    2016-06-01

    The influence of partial coherence on the image reconstruction in ptychography is analyzed, and a simple method is proposed to reconstruct a clear image for the weakly scattering object with partially coherent illumination. It is demonstrated numerically and experimentally that by illuminating a weakly scattering object with a divergent radiation beam, and doing the reconstruction only from the bright-field diffraction data, the mathematical ambiguity and corresponding reconstruction errors related to the partial coherency can be remarkably suppressed, thus clear reconstructed images can be generated even under seriously incoherent illumination.

  4. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  5. Iterative PET Image Reconstruction Using Translation Invariant Wavelet Transform

    PubMed Central

    Zhou, Jian; Senhadji, Lotfi; Coatrieux, Jean-Louis; Luo, Limin

    2009-01-01

    The present work describes a Bayesian maximum a posteriori (MAP) method using a statistical multiscale wavelet prior model. Rather than using the orthogonal discrete wavelet transform (DWT), this prior is built on the translation invariant wavelet transform (TIWT). The statistical modeling of wavelet coefficients relies on the generalized Gaussian distribution. Image reconstruction is performed in spatial domain with a fast block sequential iteration algorithm. We study theoretically the TIWT MAP method by analyzing the Hessian of the prior function to provide some insights on noise and resolution properties of image reconstruction. We adapt the key concept of local shift invariance and explore how the TIWT MAP algorithm behaves with different scales. It is also shown that larger support wavelet filters do not offer better performance in contrast recovery studies. These theoretical developments are confirmed through simulation studies. The results show that the proposed method is more attractive than other MAP methods using either the conventional Gibbs prior or the DWT-based wavelet prior. PMID:21869846

  6. High-efficiency imaging through scattering media in noisy environments via sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Tengfei; Shao, Xiaopeng; Gong, Changmei; Li, Huijuan; Liu, Jietao

    2015-11-01

    High-efficiency imaging through highly scattering media is urgently desired for various applications. Imaging speed and imaging quality, which determine the imaging efficiency, are two essential indices for any optical imaging area. Based on random walk analysis in statistical optics, the elements of a transmission matrix (TM) actually obey a Gaussian distribution. Instead of dealing with the large amounts of data contained in the TM and the speckle pattern, imaging can be achieved with only a small fraction of the data via sparse representation. We present a detailed mathematical analysis of the element distribution of the TM of a scattering imaging system and study the sparse image reconstruction (SIR) method. More specifically, we focus on analyzing the optimum sampling rates for imaging targets of different structures, which significantly influence both imaging speed and imaging quality. Results show that an optimum sampling rate exists at any noise level if the target can be sparsely represented, and that searching for this optimum sampling rate effectively balances imaging quality against imaging speed, thereby maximizing the imaging efficiency. This work is helpful for practical applications of imaging through highly scattering media with the SIR method.
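
    A minimal numerical sketch of the sparse-reconstruction idea summarised above, assuming a Gaussian random matrix as a stand-in for the calibrated TM and a plain iterative soft-thresholding (ISTA) solver; the sampling rate, sparsity level and noise level are illustrative, not the values studied in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 256                                # pixels in the vectorised target
        m = int(0.25 * n)                      # illustrative sampling rate of 25%

        # sparse ground truth and a Gaussian matrix standing in for the calibrated TM
        x_true = np.zeros(n)
        x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        y = A @ x_true + 0.01 * rng.normal(size=m)   # noisy speckle measurements

        # ISTA: iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1
        lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        for _ in range(500):
            x = x - step * (A.T @ (A @ x - y))                      # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))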

  7. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Accurately identifying the motion blur direction and length is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain the parameters via the Radon transform. However, the strong noise present in actual remote sensing images often makes the stripes indistinct, so the parameters are difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of the noisy remote sensing image are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore the remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
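
    The following hedged sketch illustrates the two stages described above using off-the-shelf scikit-image routines: the blur direction is estimated from the stripes in the log-spectrum via a Radon transform, and the image is then deblurred with Richardson-Lucy deconvolution. The simple line kernel, the angle search grid and the iteration count are illustrative assumptions, not the paper's exact procedure (which also estimates the blur length from column statistics).

        import numpy as np
        from skimage.transform import radon
        from skimage.restoration import richardson_lucy

        def estimate_blur_direction(image, angles=np.arange(0.0, 180.0)):
            # stripes in the log-spectrum encode the blur; the Radon transform finds their orientation
            spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
            spectrum = (spectrum - spectrum.mean()) / (spectrum.std() + 1e-12)
            sinogram = radon(spectrum, theta=angles, circle=False)
            return angles[np.argmax(sinogram.var(axis=0))]   # projection angle aligned with the stripes

        def motion_psf(length, angle_deg, size=21):
            # simple line kernel approximating uniform linear motion blur
            psf = np.zeros((size, size))
            c, t = size // 2, np.linspace(-length / 2.0, length / 2.0, 4 * size)
            rows = np.clip(np.round(c + t * np.sin(np.deg2rad(angle_deg))).astype(int), 0, size - 1)
            cols = np.clip(np.round(c + t * np.cos(np.deg2rad(angle_deg))).astype(int), 0, size - 1)
            psf[rows, cols] = 1.0
            return psf / psf.sum()

        # usage sketch, with `blurred` a 2-D float image in [0, 1] and a hypothetical 9-pixel blur length:
        # angle = estimate_blur_direction(blurred)
        # restored = richardson_lucy(blurred, motion_psf(9, angle), 30)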

  8. A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction

    PubMed Central

    Konkle, Justin J.; Goodwill, Patrick W.; Hensley, Daniel W.; Orendorff, Ryan D.; Lustig, Michael; Conolly, Steven M.

    2015-01-01

    Magnetic Particle Imaging (mpi) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative mpi across rat sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the mpi signal is lost during direct-feedthrough signal filtering, mpi reconstruction algorithms must recover this zero-frequency value. Prior x-space mpi recovery techniques were limited to 1d approaches which could introduce artifacts when reconstructing a 3d image. In this paper, we formulate x-space reconstruction as a 3d convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications. PMID:26495839

  9. A Convex Formulation for Magnetic Particle Imaging X-Space Reconstruction.

    PubMed

    Konkle, Justin J; Goodwill, Patrick W; Hensley, Daniel W; Orendorff, Ryan D; Lustig, Michael; Conolly, Steven M

    2015-01-01

    Magnetic Particle Imaging (mpi) is an emerging imaging modality with exceptional promise for clinical applications in rapid angiography, cell therapy tracking, cancer imaging, and inflammation imaging. Recent publications have demonstrated quantitative mpi across rat sized fields of view with x-space reconstruction methods. Critical to any medical imaging technology is the reliability and accuracy of image reconstruction. Because the average value of the mpi signal is lost during direct-feedthrough signal filtering, mpi reconstruction algorithms must recover this zero-frequency value. Prior x-space mpi recovery techniques were limited to 1d approaches which could introduce artifacts when reconstructing a 3d image. In this paper, we formulate x-space reconstruction as a 3d convex optimization problem and apply robust a priori knowledge of image smoothness and non-negativity to reduce non-physical banding and haze artifacts. We conclude with a discussion of the powerful extensibility of the presented formulation for future applications.
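
    As a rough one-dimensional analogue of the constrained recovery described above, the sketch below discards the DC information by measuring only first differences of a signal, then recovers it by bounded least squares with a smoothness penalty and a non-negativity constraint. The operators, the weights and the small ridge term added for numerical uniqueness are illustrative assumptions, not the paper's 3-D convex formulation.

        import numpy as np
        from scipy.optimize import lsq_linear

        n = 200
        t = np.linspace(0.0, 1.0, n)
        x_true = np.exp(-(t - 0.3) ** 2 / 0.003) + 0.5 * np.exp(-(t - 0.7) ** 2 / 0.006)

        # the DC value is lost (analogous to direct-feedthrough filtering): only first
        # differences of the signal are measured
        D1 = np.diff(np.eye(n), axis=0)
        y = D1 @ x_true + 0.002 * np.random.default_rng(1).normal(size=n - 1)

        # convex recovery: least squares + smoothness penalty + tiny ridge, subject to x >= 0
        D2 = np.diff(np.eye(n), n=2, axis=0)
        lam, ridge = 0.05, 1e-4
        A = np.vstack([D1, np.sqrt(lam) * D2, np.sqrt(ridge) * np.eye(n)])
        b = np.concatenate([y, np.zeros(n - 2), np.zeros(n)])
        x_hat = lsq_linear(A, b, bounds=(0.0, np.inf)).x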

  10. 3D Dose reconstruction: Banding artefacts in cine mode EPID images during VMAT delivery

    NASA Astrophysics Data System (ADS)

    Woodruff, H. C.; Greer, P. B.

    2013-06-01

    Cine (continuous) mode images obtained during VMAT delivery are heavily degraded by banding artefacts. We have developed a method to reconstruct the pulse sequence (and hence dose deposited) from open field images. For clinical VMAT fields we have devised a frame averaging strategy that greatly improves image quality and dosimetric information for three-dimensional dose reconstruction.

  11. Image reconstruction by regularized nonlinear inversion--joint estimation of coil sensitivities and image content.

    PubMed

    Uecker, Martin; Hohage, Thorsten; Block, Kai Tobias; Frahm, Jens

    2008-09-01

    The use of parallel imaging for scan time reduction in MRI faces problems with image degradation when using GRAPPA or SENSE for high acceleration factors. Although an inherent loss of SNR in parallel MRI is inevitable due to the reduced measurement time, the sensitivity to image artifacts that result from severe undersampling can be ameliorated by alternative reconstruction methods. While the introduction of GRAPPA and SENSE extended MRI reconstructions from a simple unitary transformation (Fourier transform) to the inversion of an ill-conditioned linear system, the next logical step is the use of a nonlinear inversion. Here, a respective algorithm based on a Newton-type method with appropriate regularization terms is demonstrated to improve the performance of autocalibrating parallel MRI--mainly due to a better estimation of the coil sensitivity profiles. The approach yields images with considerably reduced artifacts for high acceleration factors and/or a low number of reference lines.

  12. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    PubMed

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) through total variation regularization based on piecewise constant assumption can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data for 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems through TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of augmented Lagrange function method, the TGV regularization term has been introduced in the computed tomography and is transformed into three independent variables of the optimization problem by introducing auxiliary variables. This new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and is effective for CBCT imaging. Theoretical and technical optimization should be investigated carefully in terms of both computation efficiency and high resolution of this algorithm in application-oriented research.

  13. A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image.

    PubMed

    Guo, Chengyu; Ruan, Songsong; Liang, Xiaohui; Zhao, Qinping

    2016-02-20

    Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Using an unrestricted capture scheme, which produces occlusions or breezing, the information describing each part of a human body and the relationship between each part or even different pedestrians must be present in a still image. Using this framework, a multi-layered, spatial, virtual, human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study are used to describe the effectiveness and usability of the proposed approach.

  14. A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image

    PubMed Central

    Guo, Chengyu; Ruan, Songsong; Liang, Xiaohui; Zhao, Qinping

    2016-01-01

    Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Using an unrestricted capture scheme, which produces occlusions or breezing, the information describing each part of a human body and the relationship between each part or even different pedestrians must be present in a still image. Using this framework, a multi-layered, spatial, virtual, human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study are used to describe the effectiveness and usability of the proposed approach. PMID:26907289

  15. Targeting accurate object extraction from an image: a comprehensive study of natural image matting.

    PubMed

    Zhu, Qingsong; Shao, Ling; Li, Xuelong; Wang, Lei

    2015-02-01

    With the development of digital multimedia technologies, image matting has gained increasing interest from both academic and industrial communities. The purpose of image matting is to precisely extract the foreground objects with arbitrary shapes from an image or a video frame for further editing. It is generally known that image matting is inherently an ill-posed problem because we need to output three images out of only one input image. In this paper, we provide a comprehensive survey of the existing image matting algorithms and evaluate their performance. In addition to blue screen matting, we systematically divide all existing natural image matting methods into four categories: 1) color sampling-based; 2) propagation-based; 3) combination of sampling-based and propagation-based; and 4) learning-based approaches. Sampling-based methods assume that the foreground and background colors of an unknown pixel can be explicitly estimated by examining nearby pixels. Propagation-based methods are instead based on the assumption that foreground and background colors are locally smooth. Learning-based methods treat the matting process as a supervised or semisupervised learning problem. Via the learning process, users can construct a linear or nonlinear model between the alpha mattes and the image colors using a training set to estimate the alpha matte of an unknown pixel without any assumption about the characteristics of the testing image. With three benchmark data sets, the various matting algorithms are evaluated and compared using several metrics to demonstrate the strengths and weaknesses of each method both quantitatively and qualitatively. Finally, we conclude this paper by outlining the research trends and suggesting a number of promising directions for future development. PMID:25423658

  16. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  17. Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images

    NASA Astrophysics Data System (ADS)

    Deng, Yuanbo; Chu, Daping

    2016-03-01

    The present study evaluates how masking a phase-only hologram with apertures of different filling factors affects the corresponding reconstructed image. A square aperture with varying filling factor is applied to the phase-only hologram of the target image, and the average cross-section intensity profile of the reconstructed image is obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the image. Meanwhile, the Lena image is used as the target and the metrics RMSE and SSIM are used to assess the quality of the reconstructed image. The results show that the PSF of the image agrees with the PSF of the Fourier transform of the mask, and as the filling factor of the mask decreases, the width of the PSF increases and the quality of the reconstructed image drops. These characteristics could be used in practical situations where the phase-only hologram is confined or needs to be sliced or tiled.
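
    A small sketch of the kind of numerical experiment described above: a phase-only hologram is masked by centred square apertures of decreasing filling factor and the reconstruction quality is scored with RMSE and SSIM. The hologram here is simply the phase of the target's Fourier transform rather than an optimised phase-only hologram, so the absolute scores are not meaningful; only the trend with filling factor is illustrated.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.metrics import structural_similarity

        target = img_as_float(data.camera())
        phase = np.angle(np.fft.fft2(target))        # naive phase-only hologram of the target

        def reconstruct_with_mask(phase, filling_factor):
            # mask the hologram with a centred square aperture of the requested area fraction
            n_r, n_c = phase.shape
            h = int(round(n_r * np.sqrt(filling_factor))) // 2
            w = int(round(n_c * np.sqrt(filling_factor))) // 2
            mask = np.zeros_like(phase)
            mask[n_r // 2 - h:n_r // 2 + h, n_c // 2 - w:n_c // 2 + w] = 1.0
            recon = np.abs(np.fft.ifft2(np.fft.ifftshift(mask) * np.exp(1j * phase)))
            return recon / recon.max()

        ref = target / target.max()
        for ff in (1.0, 0.5, 0.25):
            recon = reconstruct_with_mask(phase, ff)
            rmse = np.sqrt(np.mean((recon - ref) ** 2))
            ssim = structural_similarity(ref, recon, data_range=1.0)
            print(f"filling factor {ff:.2f}: RMSE={rmse:.3f}, SSIM={ssim:.3f}")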

  18. An accurate method for energy spectrum reconstruction of Linac beams based on EPID measurements of scatter radiation

    NASA Astrophysics Data System (ADS)

    Juste, B.; Miró, R.; Verdú, G.; Santos, A.

    2014-06-01

    This work presents a methodology to reconstruct a Linac high energy photon spectrum beam. The method is based on EPID scatter images generated when the incident photon beam impinges onto a plastic block. The distribution of scatter radiation produced by this scattering object placed on the external EPID surface and centered at the beam field size was measured. The scatter distribution was also simulated for a series of monoenergetic identical geometry photon beams. Monte Carlo simulations were used to predict the scattered photons for monoenergetic photon beams at 92 different locations, with 0.5 cm increments and at 8.5 cm from the centre of the scattering material. Measurements were performed with the same geometry using a 6 MeV photon beam produced by the linear accelerator. A system of linear equations was generated to combine the polyenergetic EPID measurements with the monoenergetic simulation results. Regularization techniques were applied to solve the system for the incident photon spectrum. A linear matrix system, A×S=E, was developed to describe the scattering interactions and their relationship to the primary spectrum (S). A is the monoenergetic scatter matrix determined from the Monte Carlo simulations, S is the incident photon spectrum, and E represents the scatter distribution characterized by EPID measurement. Direct matrix inversion methods produce results that are not physically consistent due to errors inherent in the system, therefore Tikhonov regularization methods were applied to address the effects of these errors and to solve the system for obtaining a consistent bremsstrahlung spectrum.
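
    For the linear system A×S=E described above, a minimal Tikhonov-regularised solution can be written in a few lines. The regularisation parameter, the clipping of negative values and the variable names are illustrative assumptions, not the exact regularisation scheme used in the study.

        import numpy as np

        def tikhonov_spectrum(A, E, alpha):
            # A     : (n_meas, n_energy) monoenergetic scatter matrix from Monte Carlo
            # E     : (n_meas,) measured EPID scatter distribution
            # alpha : Tikhonov regularisation parameter (chosen e.g. by an L-curve scan)
            n_energy = A.shape[1]
            lhs = A.T @ A + alpha * np.eye(n_energy)
            S = np.linalg.solve(lhs, A.T @ E)
            return np.clip(S, 0.0, None)   # negative fluence is unphysical; clip as a crude constraint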

  19. Development of anatomically and dielectrically accurate breast phantoms for microwave imaging applications

    NASA Astrophysics Data System (ADS)

    O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.

    2014-05-01

    Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year resulting in approximately 41,000 deaths. Early detection and intervention is one of the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.

  20. Accurate three-dimensional pose recognition from monocular images using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Diaz-Ramirez, Victor H.; Kober, Vitaly; Montemayor, Antonio S.; Pantrigo, Juan J.

    2016-06-01

    An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that our proposal presents high accuracy for 3-D pose estimation using monocular images.
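
    A simplified sketch of the matched-filtering step described above, using normalized cross-correlation from scikit-image to score a bank of rendered pose templates against the scene. The correlation-filter bank, local-search optimisation and synthetic rendering of the actual method are not reproduced here; the template dictionary is a hypothetical input.

        import numpy as np
        from skimage.feature import match_template

        def best_pose_match(scene, candidate_templates):
            # scene               : 2-D grayscale image containing the target
            # candidate_templates : dict mapping a pose label to a 2-D rendered view
            # returns (best_pose, (row, col), peak_correlation)
            best = (None, None, -np.inf)
            for pose, template in candidate_templates.items():
                corr = match_template(scene, template, pad_input=True)
                peak = np.unravel_index(np.argmax(corr), corr.shape)
                if corr[peak] > best[2]:
                    best = (pose, peak, corr[peak])
            return best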

  1. Characterization and Reconstruction of Nanolipoprotein Particles (Nlps) by Cryo-EM and Image Reconstruction

    SciTech Connect

    Pesavento, J B; Morgan, D; Bermingham, R; Zamora, D; Chromy, B; Segelke, B; Coleman, M; Xing, L; Cheng, H; Bench, G; Hoeprich, P

    2007-06-07

    Nanolipoprotein particles (NLPs) are small 10-20 nm diameter assemblies of apolipoproteins and lipids. At Lawrence Livermore National Laboratory (LLNL), they have constructed multiple variants of these assemblies. NLPs have been generated from a variety of lipoproteins, including apolipoprotein A1, apolipophorin III, apolipoprotein E4 22K, and MSP1T2 (Nanodisc, Inc.). Lipids used included DMPC (bulk of the bilayer material), DMPE (in various amounts), and DPPC. NLPs were made in either the absence or presence of the detergent cholate. They have collected electron microscopy data as a part of the characterization component of this research. Although purified by size exclusion chromatography (SEC), samples are somewhat heterogeneous when analyzed at the nanoscale by negative stained cryo-EM. Images reveal a broad range of shape heterogeneity, suggesting variability in conformational flexibility; in fact, modeling studies point to dynamics of inter-helical loop regions within apolipoproteins as being a possible source for observed variation in NLP size. Initial attempts at three-dimensional reconstructions have proven to be challenging due to this size and shape disparity. They are pursuing a strategy of computational size exclusion to group particles into subpopulations based on average particle diameter. They show here results from their ongoing efforts at statistically and computationally subdividing NLP populations to realize greater homogeneity and then generate 3D reconstructions.

  2. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
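
    A minimal sketch of estimating basis coefficients with the conventional MLEM update mentioned above. It assumes a non-negative system matrix already composed with a non-negative patch-based basis, and it omits the sparsity-imposing ADMM variant described in the abstract; matrix names and iteration counts are illustrative.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            # plain MLEM for Poisson data y ~ Poisson(A @ c), with A and c non-negative;
            # here A would be the PET system matrix composed with the patch-based basis,
            # and c the basis coefficients being estimated
            c = np.ones(A.shape[1])
            sens = A.sum(axis=0) + eps              # sensitivity (column sums)
            for _ in range(n_iter):
                proj = A @ c + eps                  # forward projection
                c *= (A.T @ (y / proj)) / sens      # multiplicative EM update
            return c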

  3. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it needs to be used in the image reconstruction software. This would greatly speed up the image reconstruction since most of the data will be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that the reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing an EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and investigated as a viable alternative for real-time image reconstruction. In the OE algorithm all acquired events are seen as points that are located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts guided by the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.

  4. Avoiding Complications and Technical Variability During Arthroscopically Assisted Transtibial ACL Reconstructions by Using a C-Arm With Image Intensifier

    PubMed Central

    Trentacosta, Natasha; Fillar, Allison Liefeld; Liefeld, Cynthia Pierce; Hossack, Michael D.; Levy, I. Martin

    2014-01-01

    Background: Surgical reconstruction of the anterior cruciate ligament (ACL) can be complicated by incorrect and variable tunnel placement, graft tunnel mismatch, cortical breaches, and inadequate fixation due to screw divergence. This is the first report describing the use of a C-arm with image intensifier employed for the sole purpose of eliminating those complications during transtibial ACL reconstruction. Purpose: To determine if the use of a C-arm with image intensifier during arthroscopically assisted transtibial ACL reconstruction (IIAA-TACLR) eliminated common complications associated with bone–patellar tendon–bone ACL reconstruction, including screw divergence, cortical breaches, graft-tunnel mismatch, and improper positioning of the femoral and tibial tunnels. Study Design: Case series; Level of evidence, 4. Methods: A total of 110 consecutive patients (112 reconstructed knees) underwent identical IIAA-TACLR using a bone–patellar tendon–bone autograft performed by a single surgeon. Intra- and postoperative radiographic images and operative reports were evaluated for each patient looking for evidence of cortical breaching and screw divergence. Precision of femoral tunnel placement was evaluated using a sector map modified from Bernard et al. Graft recession distance and tibial α angles were recorded. Results: There were no femoral or tibial cortical breaches noted intraoperatively or on postoperative images. There were no instances of loss of fixation screw major thread engagement. There were no instances of graft-tunnel mismatch. The positions of the femoral tunnels were accurate and precise, falling into the desired sector of our location map (sector 1). Tibial α angles and graft recession distances varied widely. Conclusion: The use of the C-arm with image intensifier enabled accurate and precise tunnel placement and completely eliminated cortical breach, graft-tunnel mismatch, and screw divergence during IIAA-TACLR by allowing incremental

  5. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at witness plates attached. The results have been assessed by comparison with measurements from highly-accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000s are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.

  6. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.

  7. Accurate color synthesis of three-dimensional objects in an image.

    PubMed

    Xin, John H; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing. PMID:15139423

  8. Accurate and reliable segmentation of the optic disc in digital fundus images

    PubMed Central

    Giachetti, Andrea; Ballerini, Lucia; Trucco, Emanuele

    2014-01-01

    We describe a complete pipeline for the detection and accurate automatic segmentation of the optic disc in digital fundus images. This procedure provides separation of vascular information and accurate inpainting of vessel-removed images, symmetry-based optic disc localization, and fitting of incrementally complex contour models at increasing resolutions using information related to inpainted images and vessel masks. Validation experiments, performed on a large dataset of images of healthy and pathological eyes, annotated by experts and partially graded with a quality label, demonstrate the good performance of the proposed approach. The method is able to detect the optic disc and trace its contours better than the other systems presented in the literature and tested on the same data. The average error in the obtained contour masks is reasonably close to the interoperator errors and suitable for practical applications. The optic disc segmentation pipeline is currently integrated in a complete software suite for the semiautomatic quantification of retinal vessel properties from fundus camera images (VAMPIRE). PMID:26158034

  9. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination for dental x-ray imaging, the so-called Interior Computed Tomography (ICT), which may reduce the dose to the patient outside the target region-of-interest (ROI). Here an x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions with two ratios of 0.28 and 0.14 between the target ROI and the whole phantom size and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high quality by using the CS framework even with few-view projection data, while still preserving sharp edges in the images.
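
    To illustrate the few-view setting discussed above, the sketch below simulates a 45-view acquisition of the Shepp-Logan phantom and reconstructs it iteratively with scikit-image's SART. SART with a non-negativity clip is used here only as a stand-in for the paper's CS-based solver, and the ROI truncation of interior tomography is not modelled.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon_sart, rescale

        # few-view acquisition: 45 projections over 180 degrees
        phantom = rescale(shepp_logan_phantom(), 0.5)
        theta = np.linspace(0.0, 180.0, 45, endpoint=False)
        sinogram = radon(phantom, theta=theta)

        # iterative reconstruction (SART used here as a stand-in for the CS solver)
        recon = iradon_sart(sinogram, theta=theta)
        for _ in range(4):
            recon = iradon_sart(sinogram, theta=theta, image=recon)
            np.clip(recon, 0.0, None, out=recon)    # simple non-negativity constraint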

  10. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model.

    PubMed

    Bindu, G; Semenov, S

    2013-01-01

    This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system having the transceivers modelled using thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion and successful image reconstruction has been shown implying its robustness.

  11. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    SciTech Connect

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  12. SU-E-J-76: CBCT Reconstruction of a Full Couch Using Rigid Registration and Pre-Scanned Couch Image and Its Clinical Application

    SciTech Connect

    Hu, E; Lasio, G; Lee, M; Chen, S; Yi, B

    2015-06-15

    Purpose: Only a part of a treatment couch is reconstructed in CBCT due to the limited field of view (FOV). This often generates inaccurate results in the delivered dose evaluation with CBCT and more noise in the CBCT reconstruction. Full reconstruction of the couch at treatment setup can be used for more accurate exit beam dosimetry. The goal of this study is to develop a method to reconstruct a full treatment couch using a pre-scanned couch image and rigid registration. Methods: A full couch (Exact Couch, Varian) model image was reconstructed by rigidly registering and combining two sets of partial CBCT images. The full couch model includes three parts: two side rails and a couch top. A patient CBCT was reconstructed with reconstruction grid size larger than the physical field of view to include the full couch. The image quality of the couch is not good due to data truncation, but good enough to allow rigid registration of the couch. A composite CBCT image of the patient plus couch has been generated from the original reconstruction by replacing couch portion with the pre-acquired model couch, rigidly registered to the original scan. We evaluated the clinical usefulness of this method by comparing treatment plans generated on the original and on the modified scans. Results: The full couch model could be attached to a patient CBCT image set via rigid image registration. Plan DVHs showed 1∼2% difference between plans with and without full couch modeling. Conclusion: The proposed method generated a full treatment couch CBCT model, which can be successfully registered to the original patient image. This method was also shown to be useful in generating more accurate dose distributions, by lowering 1∼2% dose in PTV and a few other critical organs. Part of this study is supported by NIH R01CA133539.

  13. Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging

    SciTech Connect

    Virador, Patrick R.G.

    2000-04-01

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses for the first time, the problem of fully-3D, tomographic reconstruction using a septa-less, stationary, (i.e. no rotation or linear motion), and rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled which lead to missing information. The author presents new Fourier Methods based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of responses (LORs) between the measured interaction points instead of rebinning the events into predefined crystal face LORs which is the only other method to handle DOI information proposed thus far. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest neighbor smoothing in 2D in the radial bins (b) employ a semi-iterative procedure in order to estimate the unsampled data

  14. Investigation of optimization-based reconstruction with an image-total-variation constraint in PET

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-08-01

    Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years of potential utility for CT imaging applications. In this work, we investigate tailoring the optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was explored also with respect to different data conditions and activity up-takes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration design of practical usefulness in preclinical and clinical applications.

  15. Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders

    PubMed Central

    Su, Hai; Xing, Fuyong; Kong, Xiangfei; Xie, Yuanpu; Zhang, Shaoting; Yang, Lin

    2016-01-01

    Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and an sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods.
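
    The sparse-reconstruction-with-trivial-templates idea can be sketched as a lasso problem over an augmented dictionary, with one trivial (identity) template per pixel absorbing contributions from touching neighbours. This is a simplified stand-in, not the authors' solver; the helper name and parameters are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_reconstruct(patch, dictionary, alpha=0.05):
    # Represent a vectorized test patch as a sparse combination of learned
    # shape atoms plus trivial (identity) templates modelling touching parts.
    y = patch.ravel().astype(float)
    trivial = np.eye(y.size)                 # one trivial template per pixel
    B = np.hstack([dictionary, trivial])     # augmented dictionary
    model = Lasso(alpha=alpha, positive=True, max_iter=5000)
    model.fit(B, y)
    n_atoms = dictionary.shape[1]
    coef = model.coef_
    recon = dictionary @ coef[:n_atoms]      # cell-shape part of the reconstruction
    residual = float(np.linalg.norm(y - B @ coef))
    return recon.reshape(patch.shape), residual
```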

  16. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection.

    PubMed

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-06-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer-aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction based approach to split touching cells; 2) an adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole-slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy, with an F1 score of 0.96.
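
    As a hedged sketch of the dictionary-based detection step, the code below learns a patch dictionary with scikit-learn and scores candidate windows by their sparse-reconstruction residual; the paper's adaptive dictionary selection and splitting of touching cells are not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_cell_dictionary(training_patches, n_atoms=64):
    # Learn a patch dictionary from annotated cell patches; this single
    # learned dictionary stands in for the authors' adaptive selection.
    X = np.stack([p.ravel() for p in training_patches]).astype(float)
    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          transform_algorithm='omp',
                                          transform_n_nonzero_coefs=5,
                                          random_state=0)
    learner.fit(X)
    return learner

def detection_score(learner, candidate_patch):
    # Lower reconstruction residual => candidate looks more like a cell.
    y = candidate_patch.ravel().astype(float)[None, :]
    code = learner.transform(y)
    recon = code @ learner.components_
    return float(np.linalg.norm(y - recon))
```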

  17. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    PubMed

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

    In order to rapidly acquire maize growth information in the field, a non-destructive method for measuring the maize chlorophyll content index was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China; the crop was Zheng-dan 958 planted in an experimental field of about 1 000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images by the system, which was fixed vertically above the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to indicate the chlorophyll content index. Secondly, after image smoothing with an adaptive smoothing filter, the NIR maize image was selected for segmenting the maize leaves from the background, because the gray histogram showed a large difference between plant and soil background. The NIR image segmentation algorithm followed preliminary and accurate segmentation steps (see the sketch below): (1) The results of the OTSU image segmentation method and a variable threshold algorithm were compared, and the latter proved better for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation, and dilation and erosion were used to refine the segmented image. (2) A region labeling algorithm was used to segment corn plants from the soil and weed background with an accuracy of 95.59%. The multi-spectral image of the maize canopy was then accurately segmented in the R, G and B bands separately. Thirdly, image parameters were extracted from the segmented visible and NIR images. The average gray
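
    The preliminary and accurate segmentation steps described above can be approximated with standard scikit-image tools, as in the hypothetical sketch below: a locally adaptive (variable) threshold on the NIR band, morphological clean-up, and region labeling to retain plant-sized components. Parameter values are placeholders, not those used in the study.

```python
import numpy as np
from skimage.filters import threshold_local
from skimage.morphology import binary_opening, binary_closing, disk
from skimage.measure import label, regionprops

def segment_plants(nir_image, block_size=51, min_area=500):
    # Variable (locally adaptive) threshold on the NIR band, morphological
    # clean-up, then region labeling to keep only plant-sized components.
    local_thresh = threshold_local(nir_image, block_size=block_size, method='mean')
    mask = nir_image > local_thresh
    mask = binary_closing(binary_opening(mask, disk(3)), disk(3))
    labels = label(mask)
    plant_mask = np.zeros_like(mask)
    for region in regionprops(labels):
        if region.area >= min_area:
            plant_mask[labels == region.label] = True
    return plant_mask
```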

  18. Modulus reconstruction from prostate ultrasound images using finite element modeling

    NASA Astrophysics Data System (ADS)

    Yan, Zhennan; Zhang, Shaoting; Alam, S. Kaisar; Metaxas, Dimitris N.; Garra, Brian S.; Feleppa, Ernest J.

    2012-03-01

    In medical diagnosis, elastography is becoming increasingly useful. However, such methods usually assume a planar compression applied to the tissue surface and measure the resulting deformation. The stress distribution is relatively uniform close to the surface when a large, flat compressor is used, but it diverges gradually with tissue depth. In prostate elastography, the transrectal probes used for scanning and compression are generally cylindrical side-fire or rounded end-fire probes, and the force is applied through the rectal wall. This makes it very difficult to detect cancer in the prostate, since the rounded contact surfaces exaggerate the non-uniformity of the applied stress, especially for the distal, anterior prostate. We have developed a preliminary 2D Finite Element Model (FEM) to simulate prostate deformation in elastography. The model includes a homogeneous prostate with a stiffer tumor in the proximal, posterior region of the gland. A force is applied to the rectal wall to deform the prostate, and strain and stress distributions are computed from the resultant displacements. We then use the displacements as boundary conditions and reconstruct the modulus distribution (the inverse problem) using a linear perturbation method. The FEM simulation shows that strain and strain contrast (of the lesion) decrease very rapidly with increasing depth and lateral distance; lesions would therefore not be clearly visible if located far from the probe. However, the reconstructed modulus image can better depict a relatively stiff lesion wherever it is located.
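
    A highly simplified sketch of the forward/inverse interplay described above, assuming strain is obtained by differentiating the displacement field and using a crude strain-ratio correction in place of the authors' linear perturbation solver; all names and values are hypothetical.

```python
import numpy as np

def axial_strain(displacement, spacing=1.0):
    # Axial (depth-direction) strain as the spatial derivative of the
    # measured displacement field; a minimal stand-in for the FEM output.
    return np.gradient(displacement, spacing, axis=0)

def modulus_update(E, measured_strain, simulated_strain, relax=0.5):
    # One heuristic correction step: where the current model predicts more
    # strain than was measured, the tissue is stiffer than assumed, so the
    # modulus estimate is raised (and vice versa). This is a strain-ratio
    # heuristic, not the linear perturbation method of the paper.
    ratio = np.clip(simulated_strain / (measured_strain + 1e-9), 0.1, 10.0)
    return E * (1.0 + relax * (ratio - 1.0))
```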

  19. Nonlinear algorithm for task-specific tomosynthetic image reconstruction

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Underhill, Hunter A.; Hemler, Paul F.; Lavery, John E.

    1999-05-01

    This investigation defines and tests a simple, nonlinear, task-specific method for rapid tomosynthetic reconstruction of radiographic images designed to allow an increase in specificity at the expense of sensitivity. Representative lumpectomy specimens containing cancer from human breasts were radiographed with a digital mammographic machine. The resulting projection data were processed to yield a series of tomosynthetic slices distributed throughout the breast. Five board-certified radiologists compared tomographic displays of these tissues processed both linearly (control) and nonlinearly (test) and ranked them in terms of their perceived interpretability. In another task, a different set of nine observers estimated the relative depths of six holes bored in a solid Lucite block as perceived when observed in three dimensions as a tomosynthesized series of test and control slices. All participants preferred the nonlinearly generated tomosynthetic mammograms to those produced conventionally, with or without subsequent deblurring by means of iterative deconvolution. The result was similar (p less than 0.015) when the hole-depth experiment was performed objectively. We therefore conclude that, for certain tasks that are unduly compromised by tomosynthetic blurring, the nonlinear tomosynthetic reconstruction method described here may improve diagnostic performance with a negligible increase in cost or complexity.
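
    For illustration only, the sketch below contrasts a conventional (linear) shift-and-add tomosynthesis slice with a simple nonlinear combination based on an order statistic; the order-statistic operator is a hypothetical stand-in for the task-specific nonlinear method of the paper.

```python
import numpy as np
from scipy.ndimage import shift

def tomosynthesis_slice(projections, shifts, mode='linear'):
    # Shift-and-add tomosynthesis: each projection is translated so that
    # structures in the plane of interest align, then the stack is combined.
    # 'linear' averages the aligned stack (conventional reconstruction);
    # 'nonlinear' uses a low-order percentile, which suppresses bright
    # off-plane structures at the cost of some sensitivity.
    aligned = np.stack([shift(p, s, order=1) for p, s in zip(projections, shifts)])
    if mode == 'linear':
        return aligned.mean(axis=0)
    return np.percentile(aligned, 25, axis=0)
```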

  20. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    PubMed

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing a Laplacian of Gaussian filter and graphical models (i.e., a Hidden Markov Model and a Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA, which is based on a vector representation, is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms based on raster representations. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low-quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
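
    The filtering stage can be sketched as a multi-scale Laplacian-of-Gaussian response, as below; this illustrates only the LoG step, not NIA's graphical-model tracing, and the function name and scale values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def neurite_response(image, sigmas=(1.0, 2.0, 4.0)):
    # Multi-scale Laplacian-of-Gaussian response: bright, thin neuronal
    # structures give strongly negative LoG values, so we negate the
    # scale-normalized response and keep the maximum over scales per pixel.
    img = image.astype(float)
    responses = [-(s ** 2) * gaussian_laplace(img, s) for s in sigmas]
    return np.max(np.stack(responses), axis=0)
```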