Sample records for standard reconstruction algorithms

  1. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemkiewicz, J; Palmiotti, A; Miner, M

    2014-06-01

    Purpose: Metal in patients creates streak artifacts in CT images. When such images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
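
    As a minimal illustration of the pixel-by-pixel HU comparison described above (array names and sizes are hypothetical stand-ins for co-registered standard and MAR reconstructions):

      import numpy as np

      def hu_agreement(img_a, img_b, tolerances=(35, 85)):
          """Fraction of pixels whose HU difference falls within each tolerance."""
          diff = np.abs(img_a.astype(float) - img_b.astype(float))
          return {t: float((diff <= t).mean()) for t in tolerances}

      rng = np.random.default_rng(0)
      standard = rng.normal(0.0, 20.0, size=(512, 512))        # toy "standard" image
      mar = standard + rng.normal(0.0, 10.0, size=(512, 512))  # toy "MAR" image
      print(hu_agreement(standard, mar))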

  2. GPU implementation of prior image constrained compressed sensing (PICCS)

    NASA Astrophysics Data System (ADS)

    Nett, Brian E.; Tang, Jie; Chen, Guang-Hong

    2010-04-01

    The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard filtered backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously ensuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications, including improved temporal resolution reconstruction, 4D respiratory phase-specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. For an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce its reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
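
    The objective the abstract refers to can be sketched as follows: PICCS minimizes a convex combination of the total variation of the image and of its difference from the prior, while enforcing consistency with the measured data (shown here in penalized rather than constrained form; the operator A, images, and weights are illustrative):

      import numpy as np

      def tv(u, eps=1e-8):
          """Isotropic total variation of a 2-D image, smoothed for stability."""
          gx = np.diff(u, axis=0, append=u[-1:, :])
          gy = np.diff(u, axis=1, append=u[:, -1:])
          return np.sum(np.sqrt(gx ** 2 + gy ** 2 + eps))

      def piccs_cost(x, x_prior, A, y, alpha=0.5, lam=1.0):
          """alpha weights similarity to the prior; lam weights data fidelity."""
          data = np.sum((A @ x.ravel() - y) ** 2)  # penalized surrogate for A x = y
          return alpha * tv(x - x_prior) + (1 - alpha) * tv(x) + lam * data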

  3. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

    Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model, which can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to considering image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm’s potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out to verify the algorithm and its implementation, and to provide a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  4. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ+Met channel relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input objects such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms on the nominal Standard Model sample and a Beyond Standard Model sample, we find that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.

  5. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy.

    PubMed

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-10-01

    Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts.
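
    The compositing step implied by the method can be sketched as below (hypothetical names; the respiration-correlated per-phase reconstructions and the uncorrelated full-data reconstruction are assumed to be given as NumPy volumes):

      import numpy as np

      def combine_4d_voi(recon_3d, recon_4d_phases, voi_mask):
          """Motion-resolved voxels inside the VOI, static background outside."""
          volumes = []
          for phase_img in recon_4d_phases:        # one volume per respiratory phase
              vol = recon_3d.copy()                # blurred but artifact-suppressed background
              vol[voi_mask] = phase_img[voi_mask]  # motion-resolved tumor region
              volumes.append(vol)
          return np.stack(volumes)                 # shape: (n_phases, z, y, x)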

  6. Diagnostic Performance of a Novel Coronary CT Angiography Algorithm: Prospective Multicenter Validation of an Intracycle CT Motion Correction Algorithm for Diagnostic Accuracy.

    PubMed

    Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K

    2018-06-01

    Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.
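
    For illustration, a per-subject accuracy with a 95% confidence interval like the ones quoted above can be computed as a binomial proportion; the Wilson score interval below is one common choice (the paper's exact interval method is not stated here):

      import math

      def wilson_ci(k, n, z=1.96):
          """95% Wilson score interval for k successes out of n subjects."""
          p = k / n
          denom = 1 + z ** 2 / n
          centre = (p + z ** 2 / (2 * n)) / denom
          half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
          return centre - half, centre + half

      print(wilson_ci(round(0.62 * 230), 230))  # roughly the 56-68% range quoted above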

  7. Dynamic Reconstruction Algorithm of Three-Dimensional Temperature Field Measurement by Acoustic Tomography

    PubMed Central

    Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.

    2017-01-01

    Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. In this paper, a dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established, and a dynamic algorithm is proposed that considers both the acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the spatial constraint of the temperature field with its dynamic evolution information; robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better than those of static algorithms such as the least squares method, the algebraic reconstruction technique and standard Tikhonov regularization. An effective method is thus provided for temperature field reconstruction by acoustic tomography. PMID:28895930
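
    A schematic of an objective fusing the three ingredients named above, with all operators illustrative (the robust-estimation extension and the tunneling/local-minimization solver are omitted):

      import numpy as np

      def dynamic_objective(x, A, y, x_pred, L, lam_dyn=1.0, lam_sp=0.1):
          """Fuses acoustic measurements (A x ~ y), a prediction x_pred propagated
          from the previous frame, and spatial smoothness encoded by L."""
          return (np.sum((A @ x - y) ** 2)
                  + lam_dyn * np.sum((x - x_pred) ** 2)
                  + lam_sp * np.sum((L @ x) ** 2))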

  8. Evaluation of an iterative model-based CT reconstruction algorithm by intra-patient comparison of standard and ultra-low-dose examinations.

    PubMed

    Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A

    2018-01-01

    Background: The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose: To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods: A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results: The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, IMR provided significantly improved detectability of liver lesions compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion: The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.

  9. Comparison among Reconstruction Algorithms for Quantitative Analysis of 11C-Acetate Cardiac PET Imaging.

    PubMed

    Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li

    2018-01-01

    Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on PET reconstruction methods. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in myocardium and the blood pools of the ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction showed relatively higher residuals in correlation with FBP reconstruction than did TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
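
    A minimal sketch of the 1-tissue-compartment model named above: the tissue curve is the plasma input convolved with a decaying exponential, C_t(t) = K1 * C_p(t) (*) exp(-k2 t), fitted here to synthetic data (the input function and sampling grid are illustrative):

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0.0, 20.0, 60)        # frame mid-times in minutes (toy)
      cp = 10.0 * t * np.exp(-t / 2.0)      # toy arterial input function

      def one_tissue(t, K1, k2):
          dt = t[1] - t[0]                  # discretized convolution with exp(-k2 t)
          return K1 * np.convolve(cp, np.exp(-k2 * t))[:t.size] * dt

      ct = one_tissue(t, 0.8, 0.15) + np.random.default_rng(1).normal(0, 0.05, t.size)
      (K1_fit, k2_fit), _ = curve_fit(one_tissue, t, ct, p0=(0.5, 0.1))
      print(K1_fit, k2_fit)                 # recovers roughly 0.8 and 0.15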

  10. Photoacoustic image reconstruction via deep learning

    NASA Astrophysics Data System (ADS)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Iterative algorithms have additional drawbacks: for example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
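
    A toy residual CNN in the spirit of this approach (PyTorch assumed available; the paper's two architectures are deeper): the network is trained beforehand on pairs of artifact-laden and artifact-free images, after which reconstruction is a single forward pass:

      import torch
      import torch.nn as nn

      class ArtifactRemovalCNN(nn.Module):
          """Maps an initial sparse-data reconstruction to an artifact-reduced image."""
          def __init__(self, ch=32):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(ch, 1, 3, padding=1),
              )
          def forward(self, x):
              return x + self.net(x)  # learn the residual (artifact) component

      model = ArtifactRemovalCNN()
      initial_recon = torch.randn(1, 1, 128, 128)  # stand-in for a sparse-data image
      print(model(initial_recon).shape)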

  11. CT image reconstruction with half precision floating-point values.

    PubMed

    Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc

    2011-07-01

    Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms are finding their way into clinical routine because their image quality is superior to analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, which are the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently. Instead of the standard 32 bit floating-point value (float), a recently standardized 16 bit floating-point format (half) is adopted for data representation in the image domain and in the rawdata domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are visually compared via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of the quantization noise caused by reducing the data precision of both rawdata and image data during image reconstruction is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values make it possible to speed up CT image reconstruction without compromising image quality.
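
    The precision trade-off is easy to reproduce in NumPy: half has a 10-bit mantissa, so the round-trip quantization error is on the order of 5e-4 of the stored value, far below typical CT image noise:

      import numpy as np

      rng = np.random.default_rng(0)
      image32 = rng.normal(0.0, 100.0, size=(512, 512)).astype(np.float32)  # HU-scale
      image16 = image32.astype(np.float16).astype(np.float32)  # round trip via half

      err = image32 - image16
      print(float(np.abs(err).max()))  # worst-case quantization error, well under 1 HU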

  12. Reduced projection angles for binary tomography with particle aggregation.

    PubMed

    Al-Rifaie, Mohammad Majid; Blackwell, Tim

    This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and examining whether this impacts the reconstruction of images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections, and compared to filtered back projection, a random search algorithm and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms when the number of projections is small. This potentially makes the algorithm attractive in scenarios where collecting less projection data is unavoidable.

  13. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    NASA Astrophysics Data System (ADS)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    To create a repository of clinical data, CT images and tissue samples, and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Research Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuation (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and BONE-filtered images. The CT numbers measured in the ACR CT phantom images were accurate for all reconstruction kernels from both manufacturers. As expected, visual evaluation of the spatial resolution bar patterns demonstrated that the BONE (GE) and B46f (Siemens) kernels showed higher spatial resolution than the STANDARD (GE) or B30f (Siemens) reconstruction algorithms typically used for routine body CT imaging. Only the sharper images were deemed clinically acceptable for the evaluation of diffuse lung disease (e.g. emphysema). Quantitative analyses of the extent of emphysema in the patient data showed percent volumes below the -950 HU threshold of 9.4% for the BONE reconstruction, 5.9% for the STANDARD reconstruction, and 4.7% for the BONE-filtered images. Contrary to the practice of using standard-resolution CT images for the quantitation of diffuse lung disease, these data demonstrate that a single sharp reconstruction (BONE/B46f) should be used for both the qualitative and quantitative evaluation of diffuse lung disease. The sharper reconstruction images, which are required for diagnostic interpretation, provide accurate CT numbers over the range of -1000 to +900 HU and preserve the fidelity of small structures in the reconstructed images. A filtered version of the sharper images can be accurately substituted for images reconstructed with smoother kernels for comparison to previously published results.
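
    The emphysema quantitation described above reduces to a threshold count, with the "BONE filtered" series obtained by median filtering the sharp reconstruction; a sketch with toy data standing in for segmented lung volumes:

      import numpy as np
      from scipy.ndimage import median_filter

      def emphysema_percent(volume_hu, lung_mask, threshold=-950):
          """Percent of lung voxels with CT number below the threshold."""
          return 100.0 * float((volume_hu[lung_mask] < threshold).mean())

      rng = np.random.default_rng(0)
      bone = rng.normal(-850.0, 150.0, size=(40, 64, 64))  # toy "BONE" lung volume
      mask = np.ones(bone.shape, dtype=bool)
      bone_filtered = median_filter(bone, size=3)          # the 3 x 3 x 3 median filter
      print(emphysema_percent(bone, mask), emphysema_percent(bone_filtered, mask))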

  14. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
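
    A sketch of the standardized minimum-norm estimate that initializes the recursion (sLORETA-style, fixed-orientation sources; the FOCUSS re-weighting and source-space shrinking steps are omitted):

      import numpy as np

      def standardized_min_norm(L, y, lam=1e-2):
          """Minimum-norm currents standardized by the resolution-matrix diagonal.
          L: (n_sensors, n_sources) lead field, y: (n_sensors,) measurements."""
          m = L.shape[0]
          T = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(m))  # inverse operator
          j = T @ y                                           # minimum-norm estimate
          r = np.diag(T @ L)                                  # resolution-matrix diagonal
          return j / np.sqrt(np.maximum(r, 1e-12))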

  15. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms.

    PubMed

    Tang, Jie; Nett, Brian E; Chen, Guang-Hong

    2009-10-07

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy, as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for noise performance and spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor and several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose over more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  16. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms have been used, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. In this paper, an improved wavelet denoising method combined with the parallel-beam FBP algorithm is therefore used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising method were compared with those of other methods (direct FBP, mean-filter FBP and median-filter FBP). To determine the optimal reconstruction settings, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms on two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than those of the others. This improved FBP algorithm therefore has potential value in medical imaging.
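
    A sketch of the pipeline using PyWavelets and scikit-image (both assumed available): each projection is denoised along the detector axis with db2 soft thresholding at two decomposition levels, then reconstructed with parallel-beam FBP and a Hann window, approximating the combination found best above:

      import numpy as np
      import pywt
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      phantom = shepp_logan_phantom()
      angles = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(phantom, theta=angles)
      noisy = sinogram + np.random.default_rng(0).normal(0.0, 2.0, sinogram.shape)

      coeffs = pywt.wavedec(noisy, 'db2', level=2, axis=0)
      coeffs = [coeffs[0]] + [pywt.threshold(c, value=3.0, mode='soft')
                              for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, 'db2', axis=0)[:noisy.shape[0]]
      recon = iradon(denoised, theta=angles, filter_name='hann')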

  17. Angiographic CT: in vitro comparison of different carotid artery stents-does stent orientation matter?

    PubMed

    Lettau, Michael; Bendszus, Martin; Hähnel, Stefan

    2013-06-01

    Our aim was to evaluate the in vitro visualization of different carotid artery stents on angiographic CT (ACT). Of particular interest was the influence of stent orientation relative to the angiography system, assessed by measuring the artificial lumen narrowing (ALN) caused by the stent material within the stented vessel segment, to determine whether ACT can be used to detect restenosis within the stent. ACT appearances of 17 carotid artery stents of different designs and sizes (4.0 to 11.0 mm) were investigated in vitro. Stents were placed in different orientations to the angiography system. Standard-algorithm and stent-optimized-algorithm image reconstructions were performed. For each stent, ALN was calculated. With standard-algorithm image reconstruction, ALN ranged from 19.0 to 43.6%. With stent-optimized-algorithm image reconstruction, ALN was significantly lower, ranging from 8.2 to 18.7%. Stent struts could be visualized in all stents. Differences in ALN between the different stent orientations to the angiography system were not significant. ACT evaluation of vessel patency after stent placement is possible but is impaired by ALN. Stent orientation relative to the angiography system did not significantly influence ALN. Stent-optimized-algorithm image reconstruction decreases ALN, but further research is required to define the visibility of in-stent stenosis depending on image reconstruction.
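
    Taking ALN as the relative reduction of the measurable lumen diameter within the stented segment (an assumption; the paper's exact operational definition may differ), the metric is a one-liner:

      def artificial_lumen_narrowing(lumen_in_stent_mm, lumen_reference_mm):
          """ALN in percent: apparent lumen loss caused by the stent material."""
          return 100.0 * (1.0 - lumen_in_stent_mm / lumen_reference_mm)

      print(artificial_lumen_narrowing(8.0, 9.9))  # ~19%, the best case reported above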

  18. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
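
    The central idea can be sketched as comparing images through a low-rank basis: inner products computed on subspace coefficients approximate their pixel-space counterparts at a fraction of the cost (shapes and names are illustrative; the actual method builds and updates dedicated subspaces within the E-M iteration):

      import numpy as np

      def subspace_inner_products(data_imgs, proj_imgs, k=15):
          """Approximate all data-vs-projection inner products in a rank-k basis."""
          X = np.vstack([data_imgs, proj_imgs])       # rows = flattened images
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          B = Vt[:k]                                  # orthonormal basis rows
          a = data_imgs @ B.T                         # (n_data, k) coefficients
          b = proj_imgs @ B.T                         # (n_proj, k) coefficients
          return a @ b.T                              # ~ data_imgs @ proj_imgs.T

      rng = np.random.default_rng(0)
      approx = subspace_inner_products(rng.normal(size=(50, 4096)),
                                       rng.normal(size=(20, 4096)))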

  19. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-07

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grzetic, S; Weldon, M; Noa, K

    Purpose: This study compares the newly released MaxFOV Revision 1 EFOV reconstruction algorithm for the GE RT590 to the older WideView EFOV algorithm. Two radiotherapy overlays, from Q-fix and Diacor, are included in our analysis. Hounsfield units (HU) generated with the WideView algorithm varied in the extended field (beyond 50cm), and the scanned object’s border varied from slice to slice. A validation of HU consistency between the two reconstruction algorithms is performed. Methods: A CatPhan 504 and a CIRS062 Electron Density Phantom were scanned on a GE RT590 CT-Simulator. The phantoms were positioned in multiple locations within the scan field of view so that some of the density plugs were outside the 50cm reconstruction circle. Images were reconstructed using both the WideView and MaxFOV algorithms. The HU for each scan were characterized both as averages over a volume and as profiles. Results: HU values are consistent between the two algorithms. Low-density material shows a slight increase in HU value and high-density material a slight decrease as the distance from the sweet spot increases. Border inconsistencies and shading artifacts are still present with the MaxFOV reconstruction on the Q-fix overlay but not the Diacor overlay (it should be noted that the Q-fix overlay is not currently GE-certified). HU values for water outside the 50cm FOV are within 40HU of reconstructions at the sweet spot of the scanner. CatPhan HU profiles show improvement with the MaxFOV algorithm as it approaches the scanner edge. Conclusion: The new MaxFOV algorithm improves the contour border for objects outside of the standard FOV when using a GE-approved tabletop. Air cavities outside of the standard FOV create inconsistent object borders. HU consistency is within GE specifications and the accuracy of the phantom edge improves. Further adjustments to the algorithm are being investigated by GE.

  21. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    NASA Astrophysics Data System (ADS)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as we know it from the standard Filtered Back Projection (FBP) algorithm, and the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It studies how noise propagates through the reconstruction steps, feeds this model back into the loop and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast to noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast to noise ratio the images from ASiR can be obtained using 60% less current, leading to a reduction in dose of the same amount.
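
    The contrast-to-noise ratio underlying the comparison can be computed per low-contrast insert as below (one common definition; the paper's exact formula is not restated in the abstract):

      import numpy as np

      def cnr(image, insert_mask, background_mask):
          """Contrast-to-noise ratio of a low-contrast insert vs. background."""
          contrast = abs(image[insert_mask].mean() - image[background_mask].mean())
          return contrast / image[background_mask].std()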

  22. Validation of Ionosonde Electron Density Reconstruction Algorithms with IONOLAB-RAY in Central Europe

    NASA Astrophysics Data System (ADS)

    Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra

    2016-07-01

    Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosonde, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to the transmitted frequency. The critical frequencies and virtual heights of ionospheric layers, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographical methods, electron density profiles can also be estimated from ionograms. Although a structural picture of the ionosphere in the vertical direction can be obtained from ionosonde measurements, errors may arise due to inaccuracies in signal propagation modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of electron density profiles from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any existing analytical function defining electron density with respect to height using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. IONOLAB-RAY is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. It models wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used for various purposes including calculation of actual height and generation of ionograms. In this study, the performance of the IONOLAB electron density reconstruction algorithm and the standard electron density profile algorithms of ionosondes are compared against IONOLAB-RAY wave propagation simulation at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of the ionosondes are validated against the IONOLAB-RAY results for both quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB parameter extraction and electron density reconstruction algorithm performs significantly better than the standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in the determination of the reflection height (true height) of signals and the critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.

  23. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, K; DiCostanzo, D; Gupta, N

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm on a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High-Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high-Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield unit (HU) ranges. High-Z geometric integrity was assessed using the CIRS phantom, while artifact reduction was assessed using all three phantoms. Results: The geometry of the 6.5 mm diameter rod was slightly overestimated in non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation were compared in a volume of interest (VOI) in the surrounding material (water and water-equivalent material, ~0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction than for the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality in the presence of high-Z material with minimal degradation of geometric integrity. High-Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high-Z geometries and greatly reduced for more complex geometries.

  24. TU-A-12A-07: CT-Based Biomarkers to Characterize Lung Lesion: Effects of CT Dose, Slice Thickness and Reconstruction Algorithm Based Upon a Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, B; Tan, Y; Tsai, W

    2014-06-15

    Purpose: Radiogenomics promises the ability to study cancer tumor genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to imaging acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on the measurement of image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2 mm), shape (spherical, elliptical, lobular and spiculated) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25 mm with contiguous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effect of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for three confounding factors (size, density and shape of the phantom lesions). Results: The coefficients for CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on the low-dose CT scans (25 mAs and 50 mAs). About 50% of the texture features differed significantly between the low doses and the high doses (100 and 200 mAs). Significant differences were found for almost all features when calculated on 1.25 mm, 2.5 mm, and 5 mm slice-thickness images. Reconstruction algorithm significantly affected all density-based image features, but not morphological features. Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to varying degrees, as our study has shown.

  25. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data from a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.
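
    The MART baseline admits a compact statement: each measured pixel multiplicatively corrects the voxels along its line of sight (dense-matrix form for clarity; SMART and BIMART apply the corrections simultaneously or in blocks):

      import numpy as np

      def mart(A, y, n_iter=20, mu=1.0, eps=1e-12):
          """Multiplicative ART. A: (n_rays, n_voxels) weights, y: pixel intensities."""
          x = np.ones(A.shape[1])
          for _ in range(n_iter):
              for i in range(A.shape[0]):          # one multiplicative update per ray
                  proj = A[i] @ x
                  if proj > eps and y[i] > eps:
                      x *= (y[i] / proj) ** (mu * A[i])
          return x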

  26. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method with the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and the real image of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values than the standard FBP method.
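
    The validation metric is the normalized cross-correlation between the reconstruction and the true tracer distribution; a direct implementation:

      import numpy as np

      def cross_correlation(recon, truth):
          """Normalized cross-correlation coefficient between two images."""
          a = recon - recon.mean()
          b = truth - truth.mean()
          return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))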

  27. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    PubMed

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. Simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedented channel density (450 channels/cm3), image reconstruction is a challenge, and optimization is needed to find the best algorithm in order to properly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region-of-interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account in choosing the optimal algorithm. The analysis is based on GAMOS [3] simulation, including the expected CdTe and electronics specifics.
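
    The merit parameters used for the comparison reduce to a few array reductions against the known phantom (note that, with these definitions, mse = variance + bias**2):

      import numpy as np

      def merit_parameters(recon, truth):
          """Bias, variance and mean square error of a reconstruction vs. truth."""
          err = recon - truth
          return {'bias': float(err.mean()),
                  'variance': float(err.var()),
                  'mse': float((err ** 2).mean())}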

  10. Impact of iterative metal artifact reduction on diagnostic image quality in patients with dental hardware.

    PubMed

    Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian

    2017-03-01

    Background Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Effective metal artifact reduction algorithms that are integrated into the clinical workflow are therefore crucial for achieving higher diagnostic image quality in patients with metallic hardware. Purpose To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods Thirty consecutive patients with dental fillings scheduled for CT imaging were included in the analysis. All patients underwent CT imaging using a second-generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV dual-energy, 219 mAs, gantry rotation time 0.28-1 s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included a standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results All 30 patients were included in the analysis, with comparable reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR than for standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of the degree of artifacts were significantly lower in the iMAR reconstructions than in the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05). Conclusion The tested iterative, raw-data-based MAR reconstruction algorithm allows for a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.

  11. Performance comparison of two resolution modeling PET reconstruction algorithms in terms of physical figures of merit used in quantitative imaging.

    PubMed

    Matheoud, R; Ferrando, O; Valzano, S; Lizio, D; Sacchetti, G; Ciarmiello, A; Foppiano, F; Brambilla, M

    2015-07-01

    Resolution modeling (RM) of PET systems has been introduced into iterative reconstruction algorithms for oncologic PET. RM recovers the loss of resolution and reduces the associated partial volume effect. While these methods improve observer performance, particularly in the detection of small and faint lesions, their impact on quantification accuracy still requires thorough investigation. The aim of this study was to characterize the performance of RM algorithms under controlled conditions simulating a typical (18)F-FDG oncologic study, using an anthropomorphic phantom and selected physical figures of merit used for image quantification. Measurements were performed on Biograph HiREZ (B_HiREZ) and Discovery 710 (D_710) PET/CT scanners, and reconstructions were performed using the standard iterative reconstructions and the RM algorithm associated with each scanner: TrueX and SharpIR, respectively. RM yielded a significant improvement in contrast recovery for small targets (≤17 mm diameter) only on the D_710 scanner. The maximum standardized uptake value (SUVmax) increased when RM was applied on both scanners. The SUVmax of small targets was on average lower with the B_HiREZ than with the D_710. SharpIR improved the accuracy of SUVmax determination, whereas TrueX overestimated SUVmax for sphere diameters greater than 22 mm. The goodness of fit of adaptive threshold algorithms worsened significantly when RM algorithms were employed, for both scanners. Differences in overall quantitative performance were observed between the two PET scanners. Segmentation of PET images using adaptive threshold algorithms should not be undertaken in conjunction with RM reconstructions. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. Short-term reproducibility of computed tomography-based lung density measurements in alpha-1 antitrypsin deficiency and smokers with emphysema.

    PubMed

    Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A

    2004-07-01

    To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms. Twenty-five patients with smoker's emphysema and 25 patients with alpha-1 antitrypsin deficiency underwent three scans at 2-week intervals. A low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. The reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume-adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, the bone algorithm and very low radiation doses resulted in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantifying both subtypes of emphysema. A CT protocol with a radiation dose down to 16 mAs and a soft or detail reconstruction algorithm is recommended.
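
    The two density indices are simple functions of the histogram of lung voxel HU values; a short Python sketch (assuming a 1D array of HU values from the segmented lung, names illustrative) is given below.

    ```python
    import numpy as np

    def density_indices(lung_hu):
        """Emphysema indices from the HU values of segmented lung voxels.

        PD-15 : HU value below which 15% of the lung voxels fall.
        RA-910: percentage of lung voxels with attenuation below -910 HU.
        """
        pd15 = np.percentile(lung_hu, 15)
        ra910 = 100.0 * np.mean(lung_hu < -910)
        return pd15, ra910
    ```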

  13. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm poses image reconstruction as a standard optimization problem comprising an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and an iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
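
    For context, the iterative shrinkage/thresholding family that the authors compare against alternates a data-consistency step with wavelet-domain soft-thresholding. The sketch below shows that skeleton for Cartesian undersampled MRI, assuming the PyWavelets package and a real-valued image; the paper's noise-adaptive threshold and edge-correlation prior are simplified to a fixed threshold here.

    ```python
    import numpy as np
    import pywt

    def ista_mri(kspace, mask, n_iter=50, thresh=0.01, wavelet='db4', level=3):
        """Iterative shrinkage/thresholding for undersampled Cartesian MRI.

        kspace: measured k-space, zero at unsampled positions.
        mask:   boolean sampling mask of the same shape.
        Assumes image dimensions compatible with the chosen wavelet level.
        """
        x = np.real(np.fft.ifft2(kspace))               # zero-filled start
        for _ in range(n_iter):
            resid = mask * (np.fft.fft2(x) - kspace)    # data-consistency residual
            x = np.real(x - np.fft.ifft2(resid))        # gradient step on the l2 term
            coeffs = pywt.wavedec2(x, wavelet, level=level)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, thresh, mode='soft') for c in band)
                for band in coeffs[1:]
            ]                                           # soft-threshold details (l1 prox)
            x = pywt.waverec2(coeffs, wavelet)
        return x
    ```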

  14. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    NASA Astrophysics Data System (ADS)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization in a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reductions of up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimation on images reconstructed with the proposed methods resulted in 30% and 40% bias reductions in the tumor and lung regions, respectively, for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model, along with a temporal regularization in the reconstruction process of the dynamic PET series, led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model in order to directly reconstruct parametric images.

  15. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology.

    PubMed

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-13

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization in a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reductions of up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimation on images reconstructed with the proposed methods resulted in 30% and 40% bias reductions in the tumor and lung regions, respectively, for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model, along with a temporal regularization in the reconstruction process of the dynamic PET series, led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model in order to directly reconstruct parametric images.

  16. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis.

    PubMed

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl

    2017-10-01

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) for imaging urinary tract stone disease, compared with standard-dose CT using 30% adaptive statistical iterative reconstruction (ASIR). This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both the ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones and was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up, and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones 3 mm and above.

  17. Reconstructing Householder vectors from Tall-Skinny QR

    DOE PAGES

    Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...

    2015-08-05

    The Tall-Skinny QR (TSQR) algorithm is more communication-efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
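
    The communication pattern of TSQR itself is compact enough to sketch: each row block is factored locally, and only the small R factors are combined. The flat-tree Python sketch below (NumPy, dense blocks) is illustrative; the paper's contribution, reconstructing the Householder vectors from this factored form, is not shown here.

    ```python
    import numpy as np

    def tsqr_r(A, n_blocks=4):
        """Flat-tree TSQR: R factor of a tall-skinny matrix via block QRs.

        Each row block is factored independently (the communication-friendly
        step); the stacked small R factors are then factored once more. The
        orthogonal factor is left implicit, in exactly the nonstandard form
        that Householder reconstruction converts back to the usual form.
        """
        blocks = np.array_split(A, n_blocks, axis=0)
        Rs = [np.linalg.qr(B, mode='r') for B in blocks]   # independent local QRs
        return np.linalg.qr(np.vstack(Rs), mode='r')       # combine the R factors
    ```

    Up to column signs, tsqr_r(A) agrees with np.linalg.qr(A, mode='r'), which is a convenient correctness check.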

  18. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1 regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observe that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. The theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  19. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to PET reconstructed images uses Lucy-Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm was modified accordingly, with the Lucy-Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application post-reconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm as a post-reconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating Lucy-Richardson deconvolution associated with wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
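
    The deconvolution update applied to the image estimate at each OSEM iteration has a simple closed form. Below is a Python sketch of plain Lucy-Richardson deconvolution with an isotropic Gaussian PSF (SciPy assumed); the wavelet-based denoising step is omitted, and the PSF model is an illustrative stand-in for the scanner's.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def richardson_lucy(img, sigma=2.0, n_iter=10, eps=1e-12):
        """Lucy-Richardson deconvolution with an isotropic Gaussian PSF.

        For a symmetric PSF the transpose of the blurring operator is the
        blurring operator itself, so both steps use the same filter.
        """
        est = img.copy()
        for _ in range(n_iter):
            blurred = gaussian_filter(est, sigma)        # H @ est
            ratio = img / np.maximum(blurred, eps)       # measured / estimated
            est *= gaussian_filter(ratio, sigma)         # H^T @ ratio
        return est
    ```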

  20. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract the motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired-phase projection angles; reconstructing the difference between the measured projections and these forward projections yields CTSub1, in which the desired phase component is diminished. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4D-CBCT technique was implemented using a Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.

  1. Investigation of optimization-based reconstruction with an image-total-variation constraint in PET

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-08-01

    Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years to have potential utility for CT imaging applications. In this work, we investigate tailoring optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was also explored with respect to different data conditions and activity uptakes of practical relevance. A study was conducted in particular for image reconstruction from data collected with a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration designs of practical usefulness in preclinical and clinical applications.

  2. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez-Cardona, Daniel; Nagle, Scott K.; Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis, particularly in younger patients, might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR on the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV-mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, the WT of each airway was measured and compared with the nominal value, and the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all 20 kV-mAs combinations was quantified by the sum of squares (SSQ) of the differences between the measured and nominal WT values. Finally, the kV-mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level were chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose levels. For FBP, the relative bias and the angular standard deviation of the measured WT increased steeply with decreasing radiation dose. Except for the smallest airway, MBIR enabled significant reductions in both the relative bias and the angular standard deviation of the WT, particularly at low radiation dose levels; the SSQ was reduced by 50%-96% by using MBIR. The optimal reconstruction algorithm was found to be MBIR for all seven airways assessed, and the combined use of MBIR and optimal kV-mAs selection resulted in a radiation dose reduction of 37%-83% compared with a reference scan protocol with a dose level of 1 mGy. Conclusions: The quantification accuracy of airway WT is strongly influenced by radiation dose and reconstruction algorithm. The MBIR algorithm potentially allows the desired WT quantification accuracy to be achieved with reduced radiation dose, which may enable a wider clinical use of MDCT for the assessment of airway WT, particularly for younger patients who may be more sensitive to ionizing radiation exposure.

  3. SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyu, Q; Li, B; Southern Medical University, Guangzhou

    2015-06-15

    Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we propose a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance between two neighboring pixels was used to weight the smoothing parameter in PWLS: for pixel pairs with a large BRIEF distance, a weaker smoothing constraint is enforced, so that image features are better preserved. The performance of the PWLS algorithm with the BRIEF penalty was evaluated using a Catphan 600 phantom. Results: The image quality reconstructed by the proposed PWLS-BRIEF algorithm is superior to that of the conventional PWLS method and the standard FDK method. At matched noise level, edges in PWLS-BRIEF reconstructed images are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.

  4. A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data.

    PubMed

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-16

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise-corrupted data. The principle is verified on simulated as well as experimental data, and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user-friendly software package.

  5. A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    NASA Astrophysics Data System (ADS)

    Ströhl, Florian; Kaminski, Clemens F.

    2015-03-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise-corrupted data. The principle is verified on simulated as well as experimental data, and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user-friendly software package.

  6. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    PubMed

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies with a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each receiving an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For the lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared with today's gold standard and opens new options for longitudinal studies of the heart.

  7. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies with a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each receiving an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For the lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. Conclusions: LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared with today's gold standard and opens new options for longitudinal studies of the heart.

  8. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography.

    PubMed

    Park, Justin C; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G; Liu, Chihray; Lu, Bo

    2015-12-07

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS-based algorithms with limited projections. We named this algorithm 'the common mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and the moving volumes to be updated (during each iteration) with global projections and a 'well' solved static volume respectively, the algorithm is able to reduce the noise and the under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with that of (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm. To improve the efficiency of the algorithm, the code was implemented on a graphics processing unit for parallel processing. Root mean square errors (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom were in descending order for FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR over all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31 ± 0.09%, 0.86 ± 0.11% and 0.52 ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared with the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.

  9. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing an EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work, an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode, and it is investigated whether it is a viable alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along the corresponding lines-of-response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added to the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.

  10. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    NASA Astrophysics Data System (ADS)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminating detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, all of which are valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was reconstructed very accurately as compared with the ground truth and the dual-energy CT counterpart. This means that the proposed method has an intrinsic benefit in beam-hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of the iterates in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  11. Energy reconstruction of hadrons in highly granular combined ECAL and HCAL systems

    NASA Astrophysics Data System (ADS)

    Israeli, Y.

    2018-05-01

    This paper discusses the hadronic energy reconstruction of two combined electromagnetic and hadronic calorimeter systems using physics prototypes of the CALICE collaboration: the silicon-tungsten electromagnetic calorimeter (Si-W ECAL) and the scintillator-SiPM based analog hadron calorimeter (AHCAL); and the scintillator-tungsten electromagnetic calorimeter (ScECAL) and the AHCAL. These systems were operated in hadron beams at CERN and FNAL, permitting the study of the performance of combined ECAL and HCAL systems. Two techniques for energy reconstruction are used: a standard reconstruction based on calibrated sub-detector energy sums, and one based on a software compensation algorithm making use of the local energy density information provided by the high granularity of the detectors. The software compensation-based algorithm improves the hadronic energy resolution by up to 30% compared to the standard reconstruction. The combined-system data show energy resolutions comparable to those achieved for data with showers starting only in the AHCAL, demonstrating the success of the inter-calibration of the different sub-systems despite their different geometries and readout technologies.

  12. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has primarily been used to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
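
    The first of the three inverse methods, the linear single-step Tikhonov reconstruction in time-difference mode, reduces to one regularized linear solve. A Python sketch under the usual linearized forward model is shown below; the Jacobian J and the regularization weight are assumptions of the sketch, not values from the paper.

    ```python
    import numpy as np

    def tikhonov_step(J, dC, alpha=1e-3):
        """Single-step Tikhonov reconstruction for time-difference ECT.

        J:  (n_meas, n_pix) sensitivity (Jacobian) matrix of the linearized
            forward model.
        dC: (n_meas,) difference between the two capacitance data sets.
        Returns the (n_pix,) permittivity-change image.
        """
        n_pix = J.shape[1]
        return np.linalg.solve(J.T @ J + alpha * np.eye(n_pix), J.T @ dC)
    ```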

  13. Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.

    PubMed

    Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai

    2005-10-01

    A genetic algorithm has been applied to line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method treats topographical surface reconstruction as a combinatorial optimization problem. To extend this optimization approach to three-dimensional (3-D) surface topography, this paper considers the use of a string coding in which a 3-D surface topography is represented by a set of vertex coordinates. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data, to capture the fundamental features of the surface being probed by the electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing. Experimental results on SE images are presented.

  14. Resolution recovery for Compton camera using origin ensemble algorithm.

    PubMed

    Andreyev, A; Celler, A; Ozsahin, I; Sitek, A

    2016-08-01

    Compton cameras (CCs) use electronic collimation to reconstruct images of the activity distribution. Although this approach can greatly improve imaging efficiency, due to the complex geometry of the CC principle, image reconstruction with standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data reconstructed without resolution recovery, and (c) blurred data reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using a phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity for different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events was considered, the computation time per iteration increased only by a factor of 2 for OE reconstruction with the resolution recovery correction relative to the original OE algorithm. We estimate that adding resolution recovery to OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of the images and their contrast are similar to those obtained from OE reconstructions of scans simulated with perfect energy and spatial resolution.

  15. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

    Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse-matrix implementation of minimum-variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
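
    The iterative core here is standard preconditioned conjugate gradient; the sketch below abstracts the multigrid cycle into a user-supplied callable and treats A as a matrix-free operator, as the sparse setting requires. Names, the stopping rule, and the iteration budget are illustrative assumptions.

    ```python
    import numpy as np

    def pcg(apply_A, b, precond, n_iter=30, tol=1e-6):
        """Preconditioned conjugate gradient for A x = b.

        A is symmetric positive definite and available only through
        matrix-vector products; `precond` approximates A^-1 (in the paper,
        a multigrid cycle with a layer-oriented block symmetric
        Gauss-Seidel smoother).
        """
        x = np.zeros_like(b)
        r = b - apply_A(x)                 # initial residual
        z = precond(r)
        p = z.copy()
        rz = r @ z
        for _ in range(n_iter):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)          # optimal step along p
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = precond(r)                 # preconditioned residual
            rz_new = r @ z
            p = z + (rz_new / rz) * p      # new conjugate search direction
            rz = rz_new
        return x
    ```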

  16. Study of Track Irregularity Time Series Calibration and Variation Pattern at Unit Section

    PubMed Central

    Jia, Chaolong; Wei, Lili; Wang, Hanning; Yang, Jiulin

    2014-01-01

    Focusing on problems in the quality of track irregularity time series data, this paper first presents algorithms for abnormal data identification, data offset correction, local outlier identification, and noise cancellation. It then proposes track irregularity time series decomposition and reconstruction through a wavelet decomposition and reconstruction approach. Finally, the patterns and features of the track irregularity standard deviation data sequence in unit sections are studied, and the changing trend of the track irregularity time series is discovered and described. PMID:25435869
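
    The decomposition-and-reconstruction step can be sketched in a few lines with the PyWavelets package (an assumption; the paper does not name its implementation); soft-thresholding of the detail coefficients stands in for the noise-cancellation step, and the wavelet, level, and threshold are illustrative.

    ```python
    import pywt

    def denoise_series(series, wavelet='db8', level=4, thresh=0.1):
        """Wavelet decomposition and reconstruction of a 1D track-irregularity
        time series, with soft-thresholding of the detail coefficients as a
        simple noise-cancellation stand-in.
        """
        coeffs = pywt.wavedec(series, wavelet, level=level)   # multilevel DWT
        coeffs = [coeffs[0]] + [
            pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]
        ]
        return pywt.waverec(coeffs, wavelet)                  # inverse DWT
    ```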

  17. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone-beam CT (CBCT) applications, where non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and degrade the piecewise-constant property assumed for reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and is updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.
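
    The compensation image is described only at a high level (segmentation followed by low-pass filtering); one plausible reading is sketched below, in which a crude piecewise-constant tissue template is subtracted from the current reconstruction and the difference is smoothed so that only low-frequency shading is retained. The thresholds, filter width, and segmentation rule are all illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def shading_compensation(recon, soft_hu=0.0, bone_thresh=300.0, sigma=25.0):
        """Estimate a low-frequency shading-compensation image.

        recon: current CBCT reconstruction in HU. A piecewise-constant
        template (soft tissue at soft_hu, bone kept as-is) is subtracted,
        and the difference is low-pass filtered so only smooth shading
        survives.
        """
        template = np.full_like(recon, soft_hu)            # soft-tissue template
        bone = recon > bone_thresh
        template[bone] = recon[bone]                       # leave dense bone alone
        return gaussian_filter(recon - template, sigma)    # keep low frequencies
    ```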

  18. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS-based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and the moving volumes to be updated (during each iteration) with global projections and a ‘well’ solved static volume respectively, the algorithm is able to reduce the noise and the under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with that of (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm. To improve the efficiency of the algorithm, the code was implemented on a graphics processing unit for parallel processing. Root mean square errors (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom were in descending order for FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR over all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31 ± 0.09%, 0.86 ± 0.11% and 0.52 ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared with the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.

  19. Liquid argon TPC signal formation, signal processing and reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Baller, B.

    2017-07-01

    This document describes a reconstruction chain that was developed for the ArgoNeuT and MicroBooNE experiments at Fermilab. These experiments study accelerator neutrino interactions that occur in a Liquid Argon Time Projection Chamber. Reconstructing the properties of particles produced in these interactions benefits from knowledge of the micro-physics processes that affect the creation and transport of ionization electrons to the readout system. A wire signal deconvolution technique was developed to convert wire signals to a standard form for hit reconstruction, to remove artifacts in the electronics chain, and to remove coherent noise. A unique clustering algorithm reconstructs line-like trajectories and vertices in two dimensions, which are then matched to create 3D objects. These techniques and algorithms are available to all experiments that use the LArSoft suite of software.
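
    Frequency-domain deconvolution of this kind is commonly implemented as a regularized (Wiener-style) filter. The snippet below is a generic sketch with hypothetical names, not the actual LArSoft filter chain, which uses measured field and electronics response functions and tuned filters.

    ```python
    import numpy as np

    def deconvolve_wire(signal, response, noise_to_signal=0.05):
        """Wiener-style deconvolution: divide out the field/electronics
        response in frequency space, regularized so that noise-dominated
        frequencies are suppressed instead of blown up."""
        n = len(signal)
        S = np.fft.rfft(signal, n)
        R = np.fft.rfft(response, n)
        H = np.conj(R) / (np.abs(R)**2 + noise_to_signal)
        return np.fft.irfft(S * H, n)
    ```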

  20. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.

    PubMed

    Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun

    2014-07-01

    4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdominal areas. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match the measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the robustness and efficiency of the algorithm. The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm are observed for patients 1-3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1-1.5 min per phase. High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
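
    A conceptual sketch of one FBS pass is given below; `deform` and `register` are hypothetical placeholders for the deformation model and the deformable image registration solver, and the real method parameterizes and updates the deformation vector field directly rather than re-registering from scratch each pass.

    ```python
    import numpy as np

    def fbs_iteration(v, pct, A, b, deform, register, step):
        """One forward-backward splitting pass (conceptual):
        1) forward, reconstruction-like step: nudge the current deformed
           planning CT toward agreement with the measured projections;
        2) backward step: re-fit the deformation vector field by
           registering the planning CT to the corrected image."""
        img = deform(pct, v)                   # current estimate
        resid = A @ img.ravel() - b            # projection mismatch
        img_corr = img - step * (A.T @ resid).reshape(img.shape)
        return register(pct, img_corr)         # deformable registration
    ```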

  1. Progress in SPECT/CT imaging of prostate cancer.

    PubMed

    Seo, Youngho; Franc, Benjamin L; Hawkins, Randall A; Wong, Kenneth H; Hasegawa, Bruce H

    2006-08-01

    Prostate cancer is the most common type of cancer (other than skin cancer) among men in the United States. Although prostate cancer is one of the few cancers that grow so slowly that it may never threaten the lives of some patients, it can be lethal once metastasized. Indium-111 capromab pendetide (ProstaScint, Cytogen Corporation, Princeton, NJ) imaging is indicated for staging and recurrence detection of the disease, and is particularly useful for determining whether or not the disease has spread to distant metastatic sites. However, the interpretation of 111In-capromab pendetide images is challenging without correlated structural information, mostly because the radiopharmaceutical demonstrates nonspecific uptake in the normal vasculature, bowel, bone marrow, and the prostate gland. We developed an improved method of imaging and localizing 111In-capromab pendetide using a SPECT/CT imaging system. The specific goals included: i) development and application of a novel iterative SPECT reconstruction algorithm that utilizes a priori information from coregistered CT; and ii) assessment of the clinical impact of adding SPECT/CT for prostate cancer imaging with capromab pendetide utilizing the standard and novel reconstruction techniques. Patient imaging studies with capromab pendetide were performed from 1999 to 2004 using two different SPECT/CT scanners, a prototype SPECT/CT system and a commercial SPECT/CT system (Discovery VH, GE Healthcare, Waukesha, WI). SPECT projection data from both systems were reconstructed using an experimental iterative algorithm that compensates for both photon attenuation and collimator blurring. In addition, the data obtained from the commercial system were reconstructed with attenuation correction using an OSEM reconstruction supplied by the camera manufacturer for routine clinical interpretation. For 12 sets of patient data, SPECT images reconstructed using the experimental algorithm were interpreted separately and compared with interpretation of images obtained using the standard reconstruction technique. The experimental reconstruction algorithm improved spatial resolution, reduced streak artifacts, and yielded a better correlation with the anatomic details of CT in comparison to conventional reconstruction methods (e.g., filtered back-projection or OSEM with attenuation correction only). Images produced with the experimental algorithm yielded a subjective improvement in the confidence of interpretation for 11 of 12 studies. There were also changes in interpretation for 4 of 12 studies, although the changes were not sufficient to alter prognosis or the patient treatment plan.
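
    The experimental algorithm itself is not spelled out in the record, but CT-derived compensation typically enters iterative SPECT reconstruction through the system matrix of an ML-EM (or OSEM) update. A minimal ML-EM sketch, with hypothetical names and the attenuation/collimator-blur model assumed to be folded into A:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=20):
        """Maximum-likelihood EM for emission tomography. Attenuation and
        collimator blurring are absorbed into the system matrix A, which
        is where CT-derived a priori information enters."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```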

  2. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    PubMed

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT), and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced, and both found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of point-shaped resistivity distributions, we use 2759 pairs of real lung shapes, automatically segmented from human CT data, for evaluation. The figures of merit defined in GREIT were adjusted accordingly. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function is proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.
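
    Linear EIT reconstruction of this kind amounts to applying a precomputed reconstruction matrix to each measured voltage frame. Below is a ridge-regression sketch of how such a matrix can be fitted from training pairs (point targets in GREIT, eigenimages in the proposed extension); the GREIT figures of merit and noise weighting are omitted, and all names are illustrative.

    ```python
    import numpy as np

    def train_linear_reconstructor(V, X, lam=1e-3):
        """Fit a linear reconstruction matrix R so that R @ v approximates
        the desired image x for each training pair. Columns of V are
        simulated voltage vectors; columns of X are the desired images."""
        G = V @ V.T + lam * np.eye(V.shape[0])    # regularized Gram matrix
        return X @ V.T @ np.linalg.inv(G)

    # reconstruction is then a single matrix-vector product per frame:
    # img = (R @ v_measured).reshape(ny, nx)
    ```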

  3. Comparing performance of many-core CPUs and GPUs for static and motion compensated reconstruction of C-arm CT data.

    PubMed

    Hofmann, Hannes G; Keck, Benjamin; Rohkohl, Christopher; Hornegger, Joachim

    2011-01-01

    Interventional reconstruction of 3-D volumetric data from C-arm CT projections is a computationally demanding task. Hardware optimization is not optional but mandatory for interventional image processing and, in particular, for image reconstruction, due to the high demands on performance. Several groups have published fast analytical 3-D reconstruction on highly parallel hardware such as GPUs to mitigate this issue. The authors show that the performance of modern CPU-based systems is of the same order as that of current GPUs for static 3-D reconstruction, and outperforms them for a recent motion compensated (3-D+time) image reconstruction algorithm. This work investigates two algorithms: static 3-D reconstruction as well as a recent motion compensated algorithm. The evaluation was performed using a standardized reconstruction benchmark, RabbitCT, to obtain comparable results, plus two additional clinical data sets. The authors demonstrate for a parametric B-spline motion estimation scheme that the derivative computation, which requires many write operations to memory, performs poorly on the GPU and can benefit greatly from modern CPU architectures with large caches. Moreover, on a 32-core Intel Xeon server system, the authors achieve linear scaling with the number of cores used and reconstruction times almost in the same range as current GPUs. Algorithmic innovations in the field of motion compensated image reconstruction may lead to a shift back to CPUs in the future. For analytical 3-D reconstruction, the authors show that the gap between GPUs and CPUs has become smaller. It can be performed in less than 20 s (on-the-fly) using a 32-core server.

  4. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm.

    PubMed

    Cui, Xiaoming; Li, Tao; Li, Xin; Zhou, Weihua

    2015-05-01

    The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on a HDCT (Discovery CT750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, and HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD) and the true stent diameter (TSD), and between the ISD measurement and stent type, were analysed. The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0±8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. MREIT experiments with 200 µA injected currents: a feasibility study using two reconstruction algorithms, SMM and harmonic B(Z).

    PubMed

    Arpinar, V E; Hamamura, M J; Degirmenci, E; Muftuler, L T

    2012-07-07

    Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object, the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed from these MRI data. Currently, the technique is used in research environments, primarily for studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA for low frequencies. However, published MREIT studies have utilized currents 10-400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current, and tested the performance of two MREIT reconstruction algorithms. The reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul (1998 Elektrik 6 215-25) with Tikhonov regularization, and the harmonic B(Z) method proposed by Oh et al (2003 Magn. Reson. Med. 50 875-8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity at low and high current conditions. It should be noted that a 200 µA total injected current into a cylindrical phantom generates only 14.7 µA of current in the imaging slice. Similarly, a 5 mA total injected current results in 367 µA in the imaging slice. The total acquisition times for the 200 µA and 5 mA experiments were about 1 h and 8.5 min, respectively. The results demonstrate that conductivity imaging is possible at low currents using the suggested imaging parameters and reconstructing the images using iterative SMM with Tikhonov regularization, which appears to be more tolerant of noisy data than harmonic B(Z).
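
    One Tikhonov-regularized SMM-style update can be sketched as below; in the full iterative method the sensitivity matrix S would be recomputed from the updated conductivity after each step, and the names here are illustrative rather than taken from the cited implementations.

    ```python
    import numpy as np

    def smm_step(S, db, lam):
        """One Tikhonov-regularized update of the sensitivity matrix
        method: solve (S^T S + lam*I) dsigma = S^T db, where S maps
        conductivity perturbations to the magnetic flux density
        perturbations db measured by MRI."""
        n = S.shape[1]
        return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ db)
    ```

    The regularization parameter lam trades resolution against noise amplification, which is why the abstract finds the regularized SMM more tolerant of the low-current, low-SNR data.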

  6. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    PubMed

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9%, respectively, when compared with FBP. Advanced iterative reconstruction techniques are thus able to reduce image noise and increase image CNRs, allowing these substantial dose reductions relative to FBP.

  7. SU-F-P-56: On a New Approach to Reconstruct the Patient Dose From Phantom Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangtsson, E; Vries, W de

    Purpose: The development of complex radiation treatment schemes emphasizes the need for advanced QA analysis methods to ensure patient safety. One such tool is the Delta4 DVH Anatomy software, in which the patient dose is reconstructed from phantom measurements. Deviations in the measured dose are transferred to the patient anatomy and their clinical impact is evaluated in situ. Results from the original algorithm revealed weaknesses that may introduce artefacts in the reconstructed dose. These can lead to false negatives or obscure the effects of minor dose deviations from delivery failures. Here, we present results from a new patient dose reconstruction algorithm. Methods: The main steps of the new algorithm are: (1) the dose delivered to a phantom is measured at a number of detector positions. (2) The measured dose is compared to an internally calculated dose distribution evaluated at those positions. The resulting dose difference is (3) used to calculate an energy fluence difference, which is (4) used as input to a patient dose correction calculation routine. Finally, the patient dose is reconstructed by adding this patient dose correction to the planned patient dose. The internal dose calculations in steps (2) and (4) are based on the Pencil Beam algorithm. Results: The new patient dose reconstruction algorithm has been tested on a number of patients, and the standard metrics dose deviation (DDev), distance-to-agreement (DTA) and Gamma index are improved when compared to the original algorithm. In one case the Gamma index (3%/3 mm) pass rate increased from 72.9% to 96.6%. Conclusion: The patient dose reconstruction algorithm is improved. This leads to a reduction in non-physical artefacts in the reconstructed patient dose. As a consequence, the ability to detect deviations in the dose that is delivered to the patient is improved. An increase in the Gamma index for the PTV can be seen. The corresponding author is an employee of ScandiDos.
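
    A conceptual rendering of the four-step chain is sketched below, assuming (hypothetically) that the pencil-beam dose engines can be represented as linear fluence-to-dose operators K_phantom and K_patient; the actual software certainly differs in how the fluence difference is computed, and every name here is illustrative.

    ```python
    import numpy as np

    def reconstruct_patient_dose(d_meas, d_calc, K_phantom, K_patient,
                                 d_planned, lam=1e-2):
        """Steps (1)-(4) as a linear chain: dose difference at the
        detector positions -> energy-fluence difference -> patient dose
        correction -> corrected patient dose."""
        dd = d_meas - d_calc                                  # step (2)
        n = K_phantom.shape[1]
        # step (3): regularized inversion of the phantom dose kernel
        dphi = np.linalg.solve(K_phantom.T @ K_phantom + lam * np.eye(n),
                               K_phantom.T @ dd)
        d_corr = K_patient @ dphi                             # step (4)
        return d_planned + d_corr                             # final sum
    ```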

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreyev, A.

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct images of activity distributions. Although this approach can greatly improve imaging efficiency, due to the complex geometry of the CC principle, image reconstruction with standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data reconstructed without resolution recovery, and (c) blurred data reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using a phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that adding resolution recovery to OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement in image resolution provided by OE reconstructions with resolution recovery. The quality of the images and their contrast are similar to those obtained from OE reconstructions of scans simulated with perfect energy and spatial resolutions.

  9. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, as well as the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension of robust weighting. In tests using simulated observations, the algorithm shows improved performance relative to two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  10. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
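
    For reference, a baseline FISTA for an l1-penalized least-squares problem is sketched below with illustrative names; the paper's accelerated variants replace the plain gradient step with an OS-SART subproblem and use weighted proximal operators, so this shows only the standard algorithm being accelerated.

    ```python
    import numpy as np

    def fista(A, b, lam, n_iter=100):
        """Baseline FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(n_iter):
            g = A.T @ (A @ z - b)          # gradient step at the momentum point
            v = z - g / L
            x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum
            x, t = x_new, t_new
        return x
    ```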

  11. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  12. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582

  13. Optical transillumination tomography with tolerance against refraction mismatch.

    PubMed

    Haidekker, Mark A

    2005-12-01

    Optical transillumination tomography (OT) is a laser-based imaging modality in which ballistic photons are used for projection generation. Image reconstruction is therefore similar to X-ray computed tomography. This modality promises fast image acquisition, good resolution and contrast, and inexpensive instrumentation for imaging of weakly scattering objects, such as tissue-engineered constructs. In spite of its advantages, OT is not widely used. One reason is its sensitivity to changes in material refractive index along the light path. Beam refraction artefacts cause areas of overestimated tissue density and blur geometric details. A spatial filter, introduced into the beam path to eliminate scattered photons, will also remove refracted photons from the projections. In the projections, zones affected by refraction can be detected by thresholding. By using algebraic reconstruction techniques (ART) in conjunction with suitable interpolation algorithms, reconstruction artefacts can be partly avoided. Reconstructions from a test image were performed. Standard filtered backprojection (FBP) showed a root mean square (RMS) deviation from the original image of 9.9. The RMS deviation with refraction-tolerant ART reconstruction was 0.33 or 0.24, depending on the algorithm, compared to 0.57 (FBP) and 0.06 (ART) in a non-refracting case. In addition, modified ART reconstruction allowed detection of small geometric details that were invisible in standard reconstructions. Refraction-tolerant ART may be the key to eliminating one of the major challenges of OT.
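
    A refraction-tolerant ART loop can be sketched as a Kaczmarz iteration that simply skips rays flagged (by thresholding) as refraction-affected; the paper additionally interpolates over the affected zones, which is omitted here for brevity, and all names are illustrative.

    ```python
    import numpy as np

    def art_refraction_tolerant(A, p, valid, n_sweeps=10, relax=0.5):
        """Kaczmarz-type ART over the subset of rays marked valid; the
        contribution of refraction-affected rays is left to be filled in
        by the remaining rays (and, in the paper, by interpolation)."""
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for i in np.flatnonzero(valid):
                ai = A[i]
                denom = ai @ ai
                if denom > 0:
                    x += relax * (p[i] - ai @ x) / denom * ai
        return x
    ```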

  14. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    PubMed

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Algebraic methods instead discretize the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A is usually very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without exceeding the capacity of commodity hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
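
    The blockwise idea maps directly onto scipy's LinearOperator interface: LSQR only ever needs the products A@x and A.T@y, and both can be accumulated one row block at a time. A sketch with illustrative names, not the paper's implementation:

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    def make_blockwise_operator(blocks):
        """Wrap a list of row blocks of A (each of which could be loaded
        from disk in turn) as a LinearOperator, so LSQR can use the full
        weighting matrix without it ever being resident in memory at once."""
        m = sum(B.shape[0] for B in blocks)
        n = blocks[0].shape[1]
        starts = np.cumsum([0] + [B.shape[0] for B in blocks])

        def matvec(x):                    # y = A @ x, block by block
            return np.concatenate([B @ x for B in blocks])

        def rmatvec(y):                   # z = A.T @ y, accumulated
            z = np.zeros(n)
            for B, r0 in zip(blocks, starts):
                z += B.T @ y[r0:r0 + B.shape[0]]
            return z

        return LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)

    # usage: x = lsqr(make_blockwise_operator(blocks), b, damp=0.01)[0]
    # (damp > 0 adds the Tikhonov term mentioned in the abstract)
    ```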

  15. M-AMST: an automatic 3D neuron tracing method based on mean shift and adapted minimum spanning tree.

    PubMed

    Wan, Zhijiang; He, Yishan; Hao, Ming; Yang, Jian; Zhong, Ning

    2017-03-29

    Understanding the working mechanism of the brain is one of the grandest challenges for modern science. Toward this end, the BigNeuron project was launched to gather a worldwide community to establish a big data resource and a set of state-of-the-art single-neuron reconstruction algorithms. Many groups contributed their own algorithms to the project, including our mean shift and minimum spanning tree (M-MST) method. Although M-MST is intuitive and easy to implement, the MST considers only the spatial information of the single neuron and ignores the shape information, which might lead to less precise connections between some neuron segments. In this paper, we propose an improved algorithm, namely M-AMST, in which a rotating sphere model based on coordinate transformation is used to improve the weight calculation method of M-MST. Two experiments are designed to illustrate the effect of the adapted minimum spanning tree algorithm and the adaptability of M-AMST in reconstructing a variety of neuron image datasets, respectively. In experiment 1, taking the reconstruction of APP2 as reference, we produce four difference scores (entire structure average (ESA), different structure average (DSA), percentage of different structure (PDS) and max distance of neurons' nodes (MDNN)) by comparing the neuron reconstruction of APP2 with those of the five competing algorithms. The results show that M-AMST gets lower difference scores than M-MST in ESA, PDS and MDNN. Meanwhile, M-AMST is better than N-MST in ESA and MDNN. This indicates that utilizing the adapted minimum spanning tree algorithm, which takes the shape information of the neuron into account, can achieve better neuron reconstructions. In experiment 2, 7 neuron image datasets are reconstructed and the four difference scores are calculated by comparing the gold standard reconstruction with the reconstructions produced by the 6 competing algorithms. Comparing the four difference scores of M-AMST and the other 5 algorithms, we conclude that M-AMST achieves the best difference score in 3 datasets and the second-best difference score in the other 2 datasets. We develop a pathway extraction method using a rotating sphere model based on coordinate transformation to improve the weight calculation approach of the MST. The experimental results show that M-AMST, which utilizes an adapted minimum spanning tree algorithm that takes the shape information of the neuron into account, can achieve better neuron reconstructions. Moreover, M-AMST is able to produce good neuron reconstructions in a variety of image datasets.

  16. The reconstruction algorithm used for [68Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia.

    PubMed

    Krohn, Thomas; Birmes, Anita; Winz, Oliver H; Drude, Natascha I; Mottaghy, Felix M; Behrendt, Florian F; Verburg, Frederik A

    2017-04-01

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [68Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Datasets were reconstructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [68Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm, and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [68Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information.

  17. Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo

    2015-11-01

    Recently, compressed sensing (CS) based iterative reconstruction methods have received attention because of their ability to reconstruct cone beam computed tomography (CBCT) images of good quality from sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak artifact reduction, based on the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving image resolution. In p-MGIR, the unknown CBCT volume is mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, is defined as the matrix that distinguishes between the two separate CBCT regions: where the resolution needs to be preserved, and where streaks or noise need to be suppressed. We then alternately update each part of the image by solving two sub-minimization problems iteratively, where one minimization focuses on preserving the edge information of the first part while the other concentrates on removing noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with those of the standard Feldkamp-Davis-Kress algorithm as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising image resolution. For both the phantom and the patient cases, p-MGIR is able to achieve a clinically reasonable image with 60 projections. Therefore, a clinically viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than that obtained using the other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR, which can potentially be used as a CBCT reconstruction algorithm for low-dose scan protocols.

  18. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
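
    The core of such a filter is the iterated EKF measurement update, which relinearizes the measurement model about the refined estimate; a generic sketch is shown below (not the flight code, which also fuses multiple sensor types and runs smoothing passes), with all names illustrative.

    ```python
    import numpy as np

    def iekf_update(x, P, z, h, H_jac, R, n_iter=3):
        """Iterated EKF measurement update: relinearize h() about the
        refined estimate a few times, which is what distinguishes the
        iterated filter from a single EKF update. h maps state to
        predicted measurement; H_jac returns its Jacobian."""
        xi = x.copy()
        for _ in range(n_iter):
            H = H_jac(xi)
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            xi = x + K @ (z - h(xi) - H @ (x - xi))  # Gauss-Newton form
        P_new = (np.eye(len(x)) - K @ H) @ P
        return xi, P_new
    ```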

  19. Ares I-X Best Estimated Trajectory and Comparison with Pre-Flight Predictions

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Derry, Stephen D.; Brandon, Jay M.; Starr, Brett R.; Tartabini, Paul V.; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.

  20. Total variation iterative constraint algorithm for limited-angle tomographic reconstruction of non-piecewise-constant structures

    NASA Astrophysics Data System (ADS)

    Krauze, W.; Makowski, P.; Kujawińska, M.

    2015-06-01

    Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which extends the applicability of TV regularization to non-piecewise-constant samples, such as biological cells. This approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted to a 3D binary mask that is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (the SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions show the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object at the same time. A comparison between three different patterns of object illumination arrangement shows very little impact of the projection acquisition geometry on the image quality.
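
    A skeletal version of the two-part TVIC loop is sketched below; `tv_denoise` is a hypothetical placeholder for the strong TV regularization step that produces the sharp-edged image, and plain gradient steps stand in for the SIRT3D solver.

    ```python
    import numpy as np

    def tvic(A, p, tv_denoise, thresh, n_outer=5, n_inner=20):
        """TVIC sketch: alternate (a) SIRT-like iterations constrained to
        the current binary support mask with (b) refreshing the mask from
        a strongly TV-regularized version of the current image, so smooth
        internal structure inside the mask is not flattened away."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        mask = np.ones_like(x)
        for _ in range(n_outer):
            for _ in range(n_inner):
                x = mask * (x - step * (A.T @ (A @ x - p)))   # constrained step
            mask = (tv_denoise(x) > thresh).astype(float)     # sharp-edged mask
        return x
    ```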

  1. A Reconstruction Algorithm for Breast Cancer Imaging With Electrical Impedance Tomography in Mammography Geometry

    PubMed Central

    Kao, Tzu-Jen; Isaacson, David; Saulnier, Gary J.; Newell, Jonathan C.

    2009-01-01

    The conductivity and permittivity of breast tumors are known to differ significantly from those of normal breast tissues, and electrical impedance tomography (EIT) is being studied as a modality for breast cancer imaging to exploit these differences. At present, X-ray mammography is the standard imaging modality used for breast cancer screening in clinical practice, so it is desirable to study EIT in the geometry of mammography. This paper presents a forward model of a simplified mammography geometry and a reconstruction algorithm for breast tumor imaging using EIT techniques. The mammography geometry is modeled as a rectangular box with electrode arrays on the top and bottom planes. A forward model for the electrical impedance imaging problem is derived for a homogeneous conductivity distribution and is validated by experiment using a phantom tank. A reconstruction algorithm for breast tumor imaging based on a linearization approach and the proposed forward model is presented. The proposed reconstruction algorithm performs well in the phantom experiment: the locations of a 5-mm-cube metal target and a 6-mm-cube agar target could be recovered at a target depth of 15 mm using a 32-electrode system. PMID:17405377

  2. Multistatic synthetic aperture radar image formation.

    PubMed

    Krishnan, V; Swoboda, J; Yarman, C E; Yazici, B

    2010-05-01

    In this paper, we consider a multistatic synthetic aperture radar (SAR) imaging scenario where a swarm of airborne antennas, some of which are transmitting, receiving or both, traverse arbitrary flight trajectories and transmit arbitrary waveforms without any form of multiplexing. The received signal at each receiving antenna may suffer interference from the signals scattered due to multiple transmitters and from additive thermal noise at the receiver. In this scenario, standard bistatic SAR image reconstruction algorithms produce artifacts in the reconstructed images due to these interferences. In this paper, we use microlocal analysis in a statistical setting to develop a filtered-backprojection (FBP) type analytic image formation method that suppresses artifacts due to interference while preserving the location and orientation of edges of the scene in the reconstructed image. Our FBP-type algorithm exploits the second-order statistics of the target and noise to suppress the artifacts due to interference in a mean-square sense. We present numerical simulations to demonstrate the performance of our multistatic SAR image formation algorithm and to compare it with the FBP-type bistatic SAR image reconstruction algorithm. While we mainly focus on radar applications, our image formation method is also applicable to other problems arising in fields such as acoustic, geophysical and medical imaging.

  3. A neural network approach for image reconstruction in electron magnetic resonance tomography.

    PubMed

    Durairaj, D Christopher; Krishna, Murali C; Murugesan, Ramachandran

    2007-10-01

    An object-oriented, artificial neural network (ANN) based application system for the reconstruction of two-dimensional spatial images in electron magnetic resonance (EMR) tomography is presented. The standard back propagation algorithm is utilized to train a three-layer, sigmoidal, feed-forward, supervised ANN to perform the image reconstruction. The network learns the relationship between the 'ideal' images that are reconstructed using the filtered back projection (FBP) technique and the corresponding projection data (sinograms). The input layer of the network is provided with a training set that contains projection data from various phantoms as well as in vivo objects, acquired from an EMR imager. Twenty-five different network configurations are investigated to test the generalization ability of the network. The trained ANN then reconstructs two-dimensional temporal spatial images that present the distribution of free radicals in biological systems. Image reconstruction by the trained neural network shows better time complexity than conventional iterative reconstruction algorithms such as the multiplicative algebraic reconstruction technique (MART). The network is further explored for image reconstruction from 'noisy' EMR data, and the results show better performance than the FBP method. The network is also tested for its ability to reconstruct from limited-angle EMR data sets.

  4. Performance Assessment of Different Pulse Reconstruction Algorithms for the ATHENA X-Ray Integral Field Unit

    NASA Technical Reports Server (NTRS)

    Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; den Hartog, Roland; de Plaa, Jelle; et al.

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter, on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on on-board digital processing of the current pulses induced by the heat deposited in the TES absorbers, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
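
    The "standard optimal filtering" baseline amounts to a noise-weighted template fit in the frequency domain; a minimal sketch follows, with hypothetical names and a one-sided noise power spectral density assumed to be given (the flight processing additionally handles record length, gain scales, and high-rate effects).

    ```python
    import numpy as np

    def optimal_filter_energy(pulse, template, noise_psd):
        """Optimal (matched) filtering: weight each frequency bin by
        template power over noise power; the fitted amplitude serves as
        the energy estimate. noise_psd must have len(pulse)//2 + 1 bins."""
        P = np.fft.rfft(pulse)
        T = np.fft.rfft(template)
        w = np.conj(T) / noise_psd
        return np.real(np.sum(w * P)) / np.real(np.sum(w * T))
    ```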

  5. Track finding in ATLAS using GPUs

    NASA Astrophysics Data System (ADS)

    Mattmann, J.; Schmitt, C.

    2012-12-01

    The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in terms of floating point operations due to their massively parallel approach. The usage of these GPUs could therefore significantly reduce the overall reconstruction time per event or allow for the usage of more sophisticated algorithms. In this paper, track finding in the ATLAS experiment is used as an example of how GPUs can be used in this context: the implementation on the GPU requires a change in the algorithmic flow to allow the code to work in the rather limited environment on the GPU in terms of memory, cache, and transfer speed from and to the GPU, and to make use of the massively parallel computation. Both the specific implementation of parts of the ATLAS track reconstruction chain and the performance improvements obtained will be discussed.

  6. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach, which modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions, corresponding to typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms and with NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust at any count level without requiring parameter tuning.
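
    With a Gaussian data term standing in for the Poisson log-likelihood (a deliberate simplification; the paper retains the Poisson model), ADMM for projection-space nonnegativity takes the following form. All names and the fixed penalty parameter are illustrative; the paper selects its parameters automatically.

    ```python
    import numpy as np

    def admm_nonneg_projections(A, b, rho=1.0, n_iter=50):
        """ADMM sketch for min 0.5*||Ax - b||^2 subject to Ax >= 0, via the
        split z = Ax with z constrained nonnegative. Voxel values x may go
        negative; only the modeled projections are constrained."""
        m, n = A.shape
        x = np.zeros(n); z = np.zeros(m); u = np.zeros(m)
        lhs = (1.0 + rho) * (A.T @ A)              # normal equations matrix
        for _ in range(n_iter):
            rhs = A.T @ (b + rho * (z - u))
            x = np.linalg.solve(lhs, rhs)          # x-update
            z = np.maximum(A @ x + u, 0.0)         # projection onto z >= 0
            u = u + A @ x - z                      # scaled dual update
        return x
    ```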

  7. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems; for example, low detection sensitivities result in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces ("thick" conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  8. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by the detector response, the x-ray source focal spot size, azimuthal blurring, and the reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by these different factors in the reconstruction formulation. While model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as a wire phantom. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments needed to achieve both high spatial resolution and low noise variance, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with a CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms conventional filtered back projection and TV-penalized simultaneous algebraic reconstruction technique methods (represented by the adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line-pair resolution, and retains the favorable properties of standard TV-based iterative reconstruction algorithms in improving contrast and reducing reconstruction artifacts. It improves the visibility of high-contrast details in bony areas and in brain soft tissue. For example, the results show that the ventricles and some brain folds become visible in SDIR-reconstructed images and that the contrast of visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm with FBP to 14 line-pair/cm with SDIR. Adjusting the parameters of ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than that of SDIR. Using these parameters for ASD-POCS, the MTFs of FBP and ASD-POCS at half maximum were very close, at 0.7 mm⁻¹, which SDIR increased to 1.2 mm⁻¹.

  9. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by the detector response, the x-ray source focal spot size, azimuthal blurring, and the reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by these different factors in the reconstruction formulation. While model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as a wire phantom. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments needed to achieve both high spatial resolution and low noise variance, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with a CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms conventional filtered back projection and TV-penalized simultaneous algebraic reconstruction technique methods (represented by the adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line-pair resolution, and retains the favorable properties of standard TV-based iterative reconstruction algorithms in improving contrast and reducing reconstruction artifacts. It improves the visibility of high-contrast details in bony areas and in brain soft tissue. For example, the results show that the ventricles and some brain folds become visible in SDIR-reconstructed images and that the contrast of visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm with FBP to 14 line-pair/cm with SDIR. Adjusting the parameters of ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than that of SDIR. Using these parameters for ASD-POCS, the MTFs of FBP and ASD-POCS at half maximum were very close, at 0.7 mm⁻¹, which SDIR increased to 1.2 mm⁻¹.
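
    The core idea, folding a measured blur kernel into the forward model rather than deconvolving afterwards, can be sketched in one dimension. The following Python toy (our own illustration under that reading) fits x such that blur(x) matches the data while a smoothed-TV term keeps edges; plain gradient descent stands in for the split Bregman solver used in the paper, and the Gaussian kernel stands in for an MTF-derived one.

      import numpy as np

      # 1-D toy: a piecewise-constant object seen through a measured blur kernel plus noise.
      rng = np.random.default_rng(2)
      x_true = np.zeros(200)
      x_true[60:100] = 1.0
      x_true[130:150] = 0.5
      k = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
      k /= k.sum()                                     # stand-in for an MTF-derived kernel
      blur = lambda v: np.convolve(v, k, mode="same")  # symmetric kernel: adjoint == itself
      data = blur(x_true) + 0.01 * rng.standard_normal(200)

      lam, eps, step = 0.02, 1e-3, 0.2
      D = lambda v: np.diff(v, append=v[-1])           # forward difference
      Dt = lambda v: -np.diff(v, prepend=0.0)          # its adjoint (last difference is zero)

      x = np.zeros(200)
      for _ in range(1000):
          g = D(x)
          grad = blur(blur(x) - data) + lam * Dt(g / np.sqrt(g * g + eps))
          x -= step * grad                             # deblurred, edge-preserving estimate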

  10. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hao; Folkerts, Michael; Jiang, Steve B., E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields the correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure localization accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase. Conclusions: High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.

  11. Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware

    NASA Astrophysics Data System (ADS)

    Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc

    2007-02-01

    Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor. In addition, we implemented an optimized perspective forwardprojection on the CBE which allows us to perform statistical image reconstructions such as the ordered subset convex (OSC) algorithm. Performance was measured using simulated data with 512 projections per rotation and 512² detector elements. The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection, and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.

  12. Quantitative Image Quality and Histogram-Based Evaluations of an Iterative Reconstruction Algorithm at Low-to-Ultralow Radiation Dose Levels: A Phantom Study in Chest CT

    PubMed Central

    Lee, Ki Baek

    2018-01-01

    Objective: To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods: In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, at the reference (3.4 mGy in volume CT Dose Index [CTDIvol]) and 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed using the filtered back projection (FBP) algorithm and the IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization along the x- and y-axes were visually compared. Results: Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced levels of radiation dose (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion: The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
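
    The image quality metrics compared above follow the usual ROI-based definitions; a minimal sketch in Python (the ROI masks and the use of the background standard deviation are the conventional choices, assumed here rather than taken from the paper):

      import numpy as np

      def roi_stats(img, mask):
          vals = img[mask]
          return vals.mean(), vals.std(ddof=1)

      def image_noise(img, uniform_mask):
          return roi_stats(img, uniform_mask)[1]      # SD in a uniform region

      def snr(img, roi_mask):
          mean, sd = roi_stats(img, roi_mask)
          return mean / sd

      def cnr(img, roi_mask, bg_mask):
          m_roi, _ = roi_stats(img, roi_mask)
          m_bg, sd_bg = roi_stats(img, bg_mask)
          return abs(m_roi - m_bg) / sd_bg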

  13. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection; (2) iteratively deconvoluted back projection; (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections is computed for each pixel instead of the average value; (4) a similar algorithm wherein the maximum value is computed instead of the minimum; and (5) the same type of algorithm except that the median value is computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. The resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences in a subset (breast data only) by five human observers were also performed to determine whether their subjective observations correlated with the homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
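
    Apart from the iterative deconvolution variant, the four pixel-wise rules above differ only in how shifted projections are combined. A compact sketch of that shift-and-add view (geometry reduced to integer shifts, which is an assumption of this illustration, not the paper's exact geometry):

      import numpy as np

      def tomosynthesis_slice(projections, shifts, combine="mean"):
          """Shift-and-add tomosynthesis for one reconstruction plane.

          projections : list of 2-D arrays (one per source position)
          shifts      : per-projection integer (dy, dx) pair aligning this plane
          combine     : 'mean' (conventional back projection), 'min', 'max', or 'median'
          """
          stack = np.stack([np.roll(p, s, axis=(0, 1))
                            for p, s in zip(projections, shifts)])
          op = {"mean": np.mean, "min": np.min,
                "max": np.max, "median": np.median}[combine]
          return op(stack, axis=0)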

  14. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high-accuracy, large-scale gravity anomaly data reconstruction. Based on airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry data under the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). Gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal, and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results show that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.

  15. Experimental Investigations on Airborne Gravimetry Based on Compressed Sensing

    PubMed Central

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-01-01

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high-accuracy, large-scale gravity anomaly data reconstruction. Based on airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry data under the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). Gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal, and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results show that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements. PMID:24647125
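
    OMP itself is a standard greedy solver, so a generic sketch suffices to show the loop the two records above rely on: pick the column most correlated with the residual, refit on the selected support by least squares, repeat. The toy sizes below echo the roughly 14% sampling rate from the abstract but are otherwise invented.

      import numpy as np

      def omp(A, y, k, tol=1e-9):
          """Orthogonal Matching Pursuit: greedy sparse recovery of x with y ~ A x."""
          x = np.zeros(A.shape[1])
          support, residual = [], y.copy()
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
              if np.linalg.norm(residual) < tol:
                  break
          x[support] = coef
          return x

      rng = np.random.default_rng(3)
      n, m, k = 256, 36, 5                                 # ~14% sampling
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x0 = np.zeros(n)
      x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
      print("recovery error:", np.linalg.norm(omp(A, A @ x0, k) - x0))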

  16. Compressive sensing for sparse time-frequency representation of nonstationary signals in the presence of impulsive noise

    NASA Astrophysics Data System (ADS)

    Orović, Irena; Stanković, Srdjan; Amin, Moeness

    2013-05-01

    A modified robust two-dimensional compressive sensing algorithm for the reconstruction of sparse time-frequency representations (TFRs) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that the set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected number of samples corrupted by noise. The remaining samples serve as observations used in the sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples comprising both monocomponent and multicomponent signals.
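
    The sorting-and-weighting step can be pictured as an order-statistics trim: rank the ambiguity-domain samples and drop the extremes most likely to carry impulsive outliers, keeping the rest as CS observations. A loose sketch; the fraction kept is a hypothetical choice, not a value from the paper.

      import numpy as np

      def l_statistics_mask(samples, keep_fraction=0.7):
          """Keep the smaller-magnitude order statistics; flag the rest as corrupted."""
          order = np.argsort(np.abs(samples).ravel())
          keep = order[: int(keep_fraction * samples.size)]
          mask = np.zeros(samples.size, dtype=bool)
          mask[keep] = True
          return mask.reshape(samples.shape)   # observations = samples[mask]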

  17. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast-converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired on the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of the same quality as the standard 364-projection FDK reconstruction using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent-quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.

  18. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions.

    PubMed

    Song, Bongyong; Park, Justin C; Song, William Y

    2014-11-07

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast-converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired on the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of the same quality as the standard 364-projection FDK reconstruction using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent-quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
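
    Stripped of the selective function evaluation safeguard (which is the paper's contribution and is omitted here), a gradient projection loop with BB step sizes looks as follows; the sketch assumes a differentiable objective such as least squares plus smoothed TV, and A, b, and the problem size are placeholders.

      import numpy as np

      def gpbb(grad, project, x0, n_iter=30, step0=1e-2):
          """Gradient projection with a Barzilai-Borwein (BB1) 2-point step size."""
          x = project(x0)
          g = grad(x)
          step = step0
          for _ in range(n_iter):
              x_new = project(x - step * g)       # projected gradient step
              g_new = grad(x_new)
              s, yk = x_new - x, g_new - g
              denom = s @ yk
              step = (s @ s) / denom if denom > 0 else step0   # BB 2-point step
              x, g = x_new, g_new
          return x

      # usage sketch with placeholder projector A and measured projections b:
      #   x = gpbb(lambda v: A.T @ (A @ v - b),    # gradient of 0.5*||Av - b||^2
      #            lambda v: np.maximum(v, 0.0),   # nonnegativity projection
      #            np.zeros(A.shape[1]))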

  19. MR-Consistent Simultaneous Reconstruction of Attenuation and Activity for Non-TOF PET/MR

    NASA Astrophysics Data System (ADS)

    Heußer, Thorsten; Rank, Christopher M.; Freitag, Martin T.; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Beyer, Thomas; Kachelrieß, Marc

    2016-10-01

    Attenuation correction (AC) is required for accurate quantification of the reconstructed activity distribution in positron emission tomography (PET). For simultaneous PET/magnetic resonance (MR), however, AC is challenging, since the MR images do not provide direct information on the attenuating properties of the underlying tissue. Standard MR-based AC does not account for the presence of bone and thus leads to an underestimation of the activity distribution. To improve quantification for non-time-of-flight PET/MR, we propose an algorithm which simultaneously reconstructs the activity and attenuation distributions from the PET emission data, using available MR images as anatomical prior information. The MR information is used to derive voxel-dependent expectations on the attenuation coefficients. The expectations are modeled using Gaussian-like probability functions. An iterative reconstruction scheme incorporating the prior information on the attenuation coefficients is used to update the attenuation and activity distributions in an alternating manner. We tested and evaluated the proposed algorithm on simulated 3D PET data of the head and pelvis regions. Activity deviations were below 5% in soft tissue and lesions compared to the ground truth, whereas standard MR-based AC resulted in activity underestimations of up to 12%.

  20. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable, in the sense that the relative magnitude of the decomposed signals is reduced by signal cancellation while the image noise accumulates from the two CT images of independent scans. Direct image decomposition therefore leads to severe degradation of the signal-to-noise ratio in the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which does not exploit the statistical properties of the decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of the decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise in the raw projections of the two CT scans is independent. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method's performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as with combined iterative reconstruction methods using different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods in preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
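
    The instability the authors describe is easy to see in the naive image-domain route, where decomposition is a per-voxel linear solve and the inverse matrix amplifies the independent noise of the two scans. A small sketch (the basis-material matrix below is a made-up example, not calibrated values):

      import numpy as np

      # hypothetical basis-material matrix: rows = (low kVp, high kVp) attenuation,
      # columns = two basis materials
      M = np.array([[0.35, 0.25],
                    [0.25, 0.21]])
      Minv = np.linalg.inv(M)

      def decompose(img_low, img_high):
          """Naive per-voxel image-domain decomposition of a dual-energy pair."""
          pair = np.stack([img_low, img_high], axis=-1)
          return pair @ Minv.T        # independent scan noise is scaled by Minv

      # the noise amplification tracks the conditioning of M:
      print("condition number of M:", np.linalg.cond(M))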

  1. Fat-constrained 18F-FDG PET reconstruction using Dixon MR imaging and the origin ensemble algorithm

    NASA Astrophysics Data System (ADS)

    Wülker, Christian; Heinzer, Susanne; Börnert, Peter; Renisch, Steffen; Prevrhal, Sven

    2015-03-01

    Combined PET/MR imaging allows the high-resolution anatomical information delivered by MRI to be incorporated into the PET reconstruction algorithm, improving PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Thus, our aim was to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. On the other hand, the Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, allows PET data to be reconstructed without the use of forward- and backprojection operations. By adequate modifications to the Markov chain transition kernel, it is possible to include anatomical a priori knowledge in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA Body Phantom simulating body water/fat composition. Reconstruction was performed (1) natively, (2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and (3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed the visibly improved contrast and reduced partial volume effect at water/fat interfaces. We observed a 17 ± 2% increase in the SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice, and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.

  2. Electrical capacitance volume tomography with high contrast dielectrics using a cuboid sensor geometry

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.

    2007-05-01

    An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high-contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom-built switch electronics to a commercially available capacitance-to-digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high-contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.

  3. Electrical capacitance volume tomography of high contrast dielectrics using a cuboid geometry

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.

    An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high-contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom-built switch electronics to a commercially available capacitance-to-digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high-contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.

  4. Image-guided filtering for improving photoacoustic tomographic image reconstruction.

    PubMed

    Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2018-06-01

    Several algorithms exist to solve the photoacoustic image reconstruction problem, depending on the expected reconstructed image features. These reconstruction algorithms typically promote one feature, such as smoothness or sharpness, in the output image. Combining these features using a guided filtering approach, which requires an input image and a guiding image, was attempted in this work. The approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods. The result obtained from linear backprojection was used as the guiding image. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve the signal-to-noise ratio of the reconstructed images (by as much as 11.23 dB), with the added advantage of being computationally efficient. The approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
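
    The guided filter itself is the classic box-filter construction of He et al.; a minimal sketch, assuming the linear-backprojection result as the guide and a regularized reconstruction as the input, with radius and eps as hypothetical settings:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def guided_filter(guide, src, radius=4, eps=1e-3):
          """Smooth `src` while following the edges of `guide` (He et al.)."""
          box = lambda v: uniform_filter(v, size=2 * radius + 1)
          mean_g, mean_s = box(guide), box(src)
          var_g = box(guide * guide) - mean_g ** 2
          cov_gs = box(guide * src) - mean_g * mean_s
          a = cov_gs / (var_g + eps)       # local linear coefficient on the guide
          b = mean_s - a * mean_g
          return box(a) * guide + box(b)

      # usage sketch: fused = guided_filter(lbp_image, tikhonov_image)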

  5. Assessment of Minimum 124I Activity Required in Uptake Measurements Before Radioiodine Therapy for Benign Thyroid Diseases.

    PubMed

    Gabler, Anja S; Kühnel, Christian; Winkens, Thomas; Freesmeyer, Martin

    2016-08-01

    This study aimed to assess a hypothetical minimum administered activity of (124)I required to achieve comparability between pretherapeutic radioiodine uptake (RAIU) measurements by (124)I PET/CT and by (131)I RAIU probe, the clinical standard. In addition, the impact of different reconstruction algorithms on (124)I RAIU and the evaluation of pixel noise as a parameter for image quality were investigated. Different scan durations were simulated by different reconstruction intervals of 600-s list-mode PET datasets (including 15 intervals up to 600 s and 5 different reconstruction algorithms: filtered-backprojection and 4 iterative techniques) acquired 30 h after administration of 1 MBq of (124)I. The Bland-Altman method was used to compare mean (124)I RAIU levels versus mean 3-MBq (131)I RAIU levels (clinical standard). The data of 37 patients with benign thyroid diseases were assessed. The impact of different reconstruction lengths on pixel noise was investigated for all 5 of the (124)I PET reconstruction algorithms. A hypothetical minimum activity was sought by means of a proportion equation, considering that the length of a reconstruction interval equates to a hypothetical activity. Mean (124)I RAIU and (131)I RAIU already showed high levels of agreement for reconstruction intervals of as short as 10 s, corresponding to a hypothetical minimum activity of 0.017 MBq of (124)I. The iterative algorithms proved generally superior to the filtered-backprojection algorithm. (124)I RAIU showed a trend toward higher levels than (131)I RAIU if the influence of retrosternal tissue was not considered, which was proven to be the cause of a slight overestimation by (124)I RAIU measurement. A hypothetical minimum activity of 0.5 MBq of (124)I obtained with iterative reconstruction appeared sufficient both visually and with regard to pixel noise. This study confirms the potential of (124)I RAIU measurement as an alternative method for (131)I RAIU measurement in benign thyroid disease and suggests that reducing the administered activity is an option. CT information is particularly important in cases of retrosternal expansion. The results are relevant because (124)I PET/CT allows additional diagnostic means, that is, the possibility of performing fusion imaging with ultrasound. (124)I PET/CT might be an alternative, especially when hybrid (123)I SPECT/CT is not available. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  6. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790

  7. High spatial resolution technique for SPECT using a fan-beam collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichihar, T.; Nambu, K.; Motomura, N.

    1993-08-01

    The physical characteristics of the collimator cause degradation of resolution with increasing distance from the collimator surface. A new convolution backprojection algorithm has been derived for fan-beam SPECT data without rebinning into parallel-beam geometry. The projections are filtered and then backprojected into the area within an isosceles triangle whose vertex is the focal point of the fan beam and whose base is the fan-beam collimator face, and outside of the circle whose center is located midway between the focal point and the center of rotation and whose diameter is the distance between the focal point and the center of rotation. Consequently, the backprojected area is close to the collimator surface. This algorithm has been implemented on a GCA-9300A SPECT system, showing good results in both phantom and patient studies. The SPECT transaxial resolution was 4.6 mm FWHM (reconstructed image matrix size of 256 × 256) at the center of the SPECT FOV using UHR (ultra-high-resolution) fan-beam collimators for brain studies. Clinically, Tc-99m HMPAO and Tc-99m ECD brain data were reconstructed using this algorithm. The reconstruction results were compared with MRI images at the same slice positions and showed significant improvement over results obtained with standard reconstruction algorithms.

  8. Quantitative comparison of OSEM and penalized likelihood image reconstruction using relative difference penalties for clinical PET

    NASA Astrophysics Data System (ADS)

    Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.

    2015-08-01

    Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty, with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate the lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets, including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as the lungs.
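
    For reference, the relative difference penalty used above is commonly written (following Nuyts et al.; the notation here is the generic literature form, not copied from this paper) as a sum over neighboring voxel pairs, so the PL objective maximizes the Poisson log-likelihood minus

      \beta \sum_{j} \sum_{k \in N_j} w_{jk} \, \frac{(f_j - f_k)^2}{f_j + f_k + \gamma \, |f_j - f_k|}

    where f_j are voxel values, N_j is the neighborhood of voxel j, w_jk are neighbor weights, beta sets the overall regularization strength, and gamma controls how strongly edges are preserved.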

  9. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    PubMed

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of the systems and algorithms being developed for quantitative SPECT imaging. An appropriate objective way to evaluate these systems is to compare their performance on the end task required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e., if we have a gold standard. However, this gold standard is very rarely known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. The results indicate that the method provided accurate rankings of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  10. Sampling limits for electron tomography with sparsity-exploiting reconstructions.

    PubMed

    Jiang, Yi; Padgett, Elliot; Hovden, Robert; Muller, David A

    2018-03-01

    Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts with insufficient projections. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of ℓ1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data are acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fraction theorem. Moreover, a limited tilt range of ±75° or less can result in distorting artifacts in sparsity-exploiting reconstructions. The influence of optimization parameters on the reconstructions is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
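
    For readers unfamiliar with the baseline, the MART comparison method uses the standard multiplicative update (textbook form, not taken from this abstract): for each ray i, every voxel intensity x_j it crosses is corrected as

      x_j \leftarrow x_j \left( \frac{y_i}{\sum_l a_{il} \, x_l} \right)^{\mu \, a_{ij}}

    where y_i is the recorded pixel intensity, a_ij is the weight of voxel j along ray i, and mu ≤ 1 is a relaxation factor; the multiplicative form keeps the reconstructed intensity field nonnegative, whereas MLOS-type methods build the initial volume from products or minima of the lines of sight.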

  12. Estimation of non-solid lung nodule volume with low-dose CT protocols: effect of reconstruction algorithm and measurement method

    NASA Astrophysics Data System (ADS)

    Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer

    2017-03-01

    Computed tomography is the primary modality of choice for assessing the stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacities) over three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithms and measurement methods in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with both nonsolid (-800 HU and -630 HU) and solid (-10 HU) radiodensities, sizes of 5 mm and 10 mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol values typical of screening (1.7 mGy) and sub-screening (0.6 mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radiodensity was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not substantially affect the accuracy of the measurements; however, the measurement method affected repeatability, with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable for developing standardized protocols and performance claims for nonsolid nodules.

  13. Impact of Reconstruction Algorithms on CT Radiomic Features of Pulmonary Tumors: Analysis of Intra- and Inter-Reader Variability and Inter-Reconstruction Algorithm Variability.

    PubMed

    Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo

    2016-01-01

    To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra- and inter-reader and inter-reconstruction algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43 ± 10.56 years) with 42 pulmonary tumors (22.56 ± 8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences (p < 0.05) among the reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV ≤ 5%). Inter-reader variability was larger than intra-reader or inter-reconstruction algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction algorithm variability was significantly greater than inter-reader variability (p < 0.013). Most of the radiomic features were significantly affected by the reconstruction algorithms. Inter-reconstruction algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.

  14. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    Based on an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field defined by a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix; the PSNR of images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed by the FGI algorithm decreases slowly, whereas the PSNR of images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise with a low-pass filter, achieving a higher denoising capability than the CGI algorithm. The FGI algorithm thus improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
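
    A minimal sketch of the pipeline the abstract describes, with invented sizes: the illumination patterns are rows of a real (cosine) DFT-style measurement matrix, the bucket detector supplies one value per pattern, and the image is recovered with the pseudo-inverse. With fewer measurements than pixels the pseudo-inverse returns the minimum-norm solution, which is where the error analysis above comes in.

      import numpy as np

      rng = np.random.default_rng(4)
      n, m = 64, 48                    # object pixels, sampling measurements (m < n)
      obj = rng.uniform(0.0, 1.0, n)   # toy 1-D object

      # presetting light fields: cosine rows of a DFT-style measurement matrix
      k = np.arange(n)
      Phi = np.cos(2.0 * np.pi * np.outer(np.arange(m), k) / n)

      bucket = Phi @ obj                           # one bucket value per pattern
      recon = np.linalg.pinv(Phi) @ bucket         # pseudo-inverse reconstruction

      print("relative error:", np.linalg.norm(recon - obj) / np.linalg.norm(obj))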

  15. Demosaicking algorithm for the Kodak-RGBW color filter array

    NASA Astrophysics Data System (ADS)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full-color image. Each CFA pixel captures only one primary color component; during demosaicking, the two unknown color components at each pixel location are estimated using information from neighboring pixels. Most demosaicking algorithms target the Bayer CFA pattern with red, green, and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.

  16. GPU-based cone beam computed tomography.

    PubMed

    Noël, Peter B; Walczak, Alan M; Xu, Jinhui; Corso, Jason J; Hoffmann, Kenneth R; Schafer, Sebastian

    2010-06-01

    The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is filtered backprojection, which for a volume of size 256³ takes up to 25 min on a standard system. Recent developments in the area of Graphics Processing Units (GPUs) make it possible to access high-performance computing solutions at low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), which was executed on an NVIDIA GeForce GTX 280. Our implementation reduces reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe whether differences occur between CPU- and GPU-based reconstructions. By using our approach, the computation time for 256³ is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512³ volumes is 8.5 s. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  17. Physically corrected forward operators for induced emission tomography: a simulation study

    NASA Astrophysics Data System (ADS)

    Viganò, Nicola Roberto; Solé, Vicente Armando

    2018-03-01

    X-ray emission tomography techniques over non-radioactive materials allow one to investigate different and important aspects of the matter that are usually not addressable with standard x-ray transmission tomography, such as density, chemical composition and crystallographic information. However, the quantitative reconstruction of these investigated properties is hindered by additional problems, including the self-attenuation of the emitted radiation. Work has been done in the past, especially concerning x-ray fluorescence tomography, but it has always focused on solving very specific problems. The novelty of this work resides in addressing the problem of induced emission tomography from a much wider perspective, introducing a unified discrete representation that can be used to modify existing algorithms to reconstruct the data of the different types of experiments. The direct outcome is a clear and simple mathematical description of the implementation details of such algorithms, despite small differences in geometry and other practical aspects. It also makes it possible to express the reconstruction as a minimization problem, allowing the use of variational methods and a more flexible modeling of the noise involved in the detection process. In addition, we look at the results of a few selected simulated-data reconstructions that describe the effect of physical corrections such as self-attenuation, and the response to noise of the adapted reconstruction algorithms.

  18. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, an entropy and run-length encoder/decoder, and K-means clustering of the invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high-compression-ratio regime and that the reconstructed fingerprint image yields proper classification.
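
    A minimal sketch of the transform-and-quantize stage of such a pipeline, using PyWavelets; the wavelet family, level count and quantization step are assumptions, and the entropy/run-length coding and K-means classification stages are omitted.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wt_compress(img, wavelet="bior4.4", levels=4, step=8.0):
        """Multiresolution wavelet decomposition followed by uniform scalar
        quantization of every subband (quantized coefficients returned)."""
        coeffs = pywt.wavedec2(img, wavelet, level=levels)
        q = [np.round(coeffs[0] / step)]                       # approximation band
        for detail in coeffs[1:]:                              # (H, V, D) per level
            q.append(tuple(np.round(d / step) for d in detail))
        return q

    def wt_decompress(q, wavelet="bior4.4", step=8.0):
        """Dequantize and invert the wavelet transform."""
        coeffs = [q[0] * step] + [tuple(d * step for d in detail)
                                  for detail in q[1:]]
        return pywt.waverec2(coeffs, wavelet)
    ```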

  19. MO-G-17A-07: Improved Image Quality in Brain F-18 FDG PET Using Penalized-Likelihood Image Reconstruction Via a Generalized Preconditioned Alternating Projection Algorithm: The First Patient Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidtlein, CR; Beattie, B; Humm, J

    2014-06-15

    Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce artificial piece-wise constant regions ("staircase" artifacts typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared-error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1 hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared to PAPA and to RMSE-optimally filtered OSEM (fully converged) in simulations, and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise-equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the ℓ1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved with clinical OSEM reconstructions.
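
    The penalty at the heart of GPAPA can be illustrated compactly; the sketch below evaluates an ℓ1 sum of 1st- through 4th-order finite differences for a 2D image. The per-order weights and the use of np.diff are assumptions about the discretization, not the paper's exact operator.

    ```python
    import numpy as np

    def multi_order_tv(img, orders=(1, 2, 3, 4), weights=(1.0, 1.0, 1.0, 1.0)):
        """Sum over orders k of the l1 norm of the k-th order finite
        differences along both image axes."""
        penalty = 0.0
        for k, w in zip(orders, weights):
            dx = np.diff(img, n=k, axis=0)   # k-th order difference, rows
            dy = np.diff(img, n=k, axis=1)   # k-th order difference, columns
            penalty += w * (np.abs(dx).sum() + np.abs(dy).sum())
        return penalty
    ```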

  20. Microsurgical reconstruction of the maxilla: Algorithm and concepts.

    PubMed

    Costa, Horácio; Zenha, Horácio; Sequeira, Hugo; Coelho, Gustavo; Gomes, Nuno; Pinto, Cristina; Martins, João; Santos, Diana; Andresen, Carolina

    2015-05-01

    The main purpose of this article is to highlight free tissue transfers as the first-choice method for three-dimensional (3D) maxillary reconstruction, particularly in providing enough bone for palate and maxillary arch reconstruction and consequently an implant-retained prosthesis. To achieve this, the myosseous free iliac crest was selected whenever possible as the first choice within the reconstructive algorithm and free flap armamentarium. A new maxillectomy classification and a reconstruction algorithm are proposed. Technical modifications and improvements accomplished over time are discussed, considering palate, dental implants and prosthesis, nasal sidewall, cranial base and dura, as well as recipient vessels. We present functional and aesthetic outcomes of the senior author's (H. C.) 24-year experience with complex midface reconstructions. The authors report and analyse a 24-year experience with 57 midface defects in 54 patients (30 males and 24 females). A total of 57 maxillary defects - classified as Class I (limited maxillectomy) = 12, Class II (subtotal maxillectomy) = 15, Class III (total maxillectomy) = 19 and Class IV (orbitomaxillectomy) = 11 - were analysed regarding sex, age, tumour recurrence, free flap, reconstruction and necrosis. In addition, functional outcomes were evaluated regarding diet, speech, globe position and vision, while aesthetic outcomes were evaluated by patient and surgeon scores. A total of 52 free flaps were performed in 47 patients; three patients were operated upon twice; and two other patients needed two sequentially linked flow-through flaps. The free flap survival rate was 96%, with two total flap losses (4%). The other seven patients were fitted with a soft tissue-retained obturator prosthesis. Microsurgical vascularised osteomyocutaneous free flaps are currently the gold standard for reconstruction of complex defects following maxillectomy. This algorithm is based on the anatomofunctional defect of the maxilla, and it facilitates flap selection, which is essential. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  1. Photoplethysmograph signal reconstruction based on a novel hybrid motion artifact detection-reduction approach. Part I: Motion and noise artifact detection.

    PubMed

    Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak

    2014-11-01

    Motion and noise artifacts (MNA) are a serious obstacle in utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present an MNA detection method which can provide a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals, (2) standard deviation of peak-to-peak amplitudes, (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We have adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm we developed was verified on PPG data segments recorded in simulation, laboratory-controlled and walking/stair-climbing experiments, respectively, and we compared several well-established MNA detection methods to our proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory-controlled finger and forehead recorded PPG data and daily-activity movement data, our proposed algorithm gives 94.4, 93.4, and 93.7% accuracies, respectively. Significant reductions in HR and SpO2 errors were noted when the artifacts identified by SVM-MNA were removed from the original signal (2.3 bpm and 2.7%) compared with when they were not (17.3 bpm and 5.4%). The accuracy and error values of our proposed method were significantly higher and lower, respectively, than those of all other detection methods. Another advantage of our method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper to this article. Finally, our MNA detection algorithm is real-time realizable, as the computational speed on a 7-s PPG data segment was found to be only 7 ms with Matlab code.
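
    A sketch of how the four time-domain parameters and the SVM stage could fit together, with SciPy and scikit-learn; the peak-detection settings, the trough-based systolic/diastolic split, and the 50-sample beat resampling are assumptions rather than the paper's exact choices.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.svm import SVC

    def mna_features(ppg, fs):
        """The four time-domain parameters for one PPG segment."""
        peaks, _ = find_peaks(ppg, distance=int(0.3 * fs))  # assumed refractory
        if len(peaks) < 3:
            return np.zeros(4)
        f1 = np.std(np.diff(peaks) / fs)          # (1) peak-to-peak intervals
        f2 = np.std(ppg[peaks])                   # (2) peak amplitudes (simplified)
        ratios = []
        for a, b in zip(peaks[:-1], peaks[1:]):
            t = a + np.argmin(ppg[a:b])           # trough between the two peaks
            ratios.append((b - t) / max(t - a, 1))  # (3) systolic/diastolic ratio
        f3 = np.std(ratios)
        beats = np.array([np.interp(np.linspace(0, 1, 50),
                                    np.linspace(0, 1, b - a), ppg[a:b])
                          for a, b in zip(peaks[:-1], peaks[1:])])
        f4 = np.mean(np.std(beats, axis=0))       # (4) pulse-shape variability
        return np.array([f1, f2, f3, f4])

    # Train on labelled clean/corrupted segments, then classify new ones:
    # X = np.array([mna_features(s, fs) for s in segments])
    # clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
    # is_corrupted = clf.predict(np.array([mna_features(s_new, fs)]))
    ```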

  2. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, not only is the required storage space reduced, but the demand on detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the inverse reconstruction problem, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images and recover edge information well. To verify the performance and stability of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, and we compare typical reconstruction algorithms under the same coding mode. Building on the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has clear advantages and can quickly and accurately recover the target image at low measurement rates.
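
    To make the model concrete, the sketch below minimizes a data-fidelity term plus a smoothed isotropic TV by plain gradient descent; the paper's solver instead uses an augmented Lagrangian with alternating-direction updates, so this illustrates the objective rather than that algorithm (step size, smoothing epsilon and boundary handling are assumptions).

    ```python
    import numpy as np

    def tv_cs_reconstruct(A, y, shape, lam=0.05, eps=1e-3, lr=1e-3, iters=500):
        """Gradient descent on ||A x - y||^2 + lam * TV_eps(x), with the TV
        smoothed so it is differentiable; boundary handling is simplified
        (periodic in the divergence). A is a dense (m, n_pixels) matrix."""
        x = np.zeros(shape)
        for _ in range(iters):
            g = (2.0 * A.T @ (A @ x.ravel() - y)).reshape(shape)  # data term
            dx = np.diff(x, axis=0, append=x[-1:, :])             # forward diffs
            dy = np.diff(x, axis=1, append=x[:, -1:])
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
            px, py = dx / mag, dy / mag
            div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
            x -= lr * (g - lam * div)                             # descent step
        return x
    ```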

  3. Blob-enhanced reconstruction technique

    NASA Astrophysics Data System (ADS)

    Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2016-09-01

    A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method; then the distribution is rebuilt as a sum of Gaussian blobs, based on the location, intensity and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of the particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes have been tested, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations). The results confirm the enhancement in velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor enables further improvement of the convergence of the reconstruction algorithm, the improvement being more considerable when the blobs are substituted more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, achieving remarkable improvements in the flow field measurements while simultaneously benefiting from the reduction in computational time due to the MR approach. Finally, BERT is also tested on experimental data, obtaining an increase of the signal-to-noise ratio in the reconstructed flow field and a higher value of the correlation factor in the velocity measurements with respect to the volume in which the blobs are not substituted.

  4. Diffraction Correlation to Reconstruct Highly Strained Particles

    NASA Astrophysics Data System (ADS)

    Brown, Douglas; Harder, Ross; Clark, Jesse; Kim, J. W.; Kiefer, Boris; Fullerton, Eric; Shpyrko, Oleg; Fohtung, Edwin

    2015-03-01

    Through the use of coherent x-ray diffraction, a three-dimensional diffraction pattern of a highly strained nano-crystal can be recorded in reciprocal space by a detector. Only the intensities are recorded, resulting in a loss of the complex phase. The recorded diffraction pattern therefore requires computational processing to reconstruct the density and complex distribution of the diffracted nano-crystal. For highly strained crystals, standard methods using the hybrid input-output (HIO) and error reduction (ER) algorithms are no longer sufficient to reconstruct the diffraction pattern. Our solution is to correlate the symmetry in reciprocal space to generate an a priori shape constraint to guide the computational reconstruction of the diffraction pattern. This approach has improved the ability to accurately reconstruct highly strained nano-crystals.
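
    For reference, the standard iteration that reportedly breaks down for highly strained crystals looks like this textbook hybrid input-output (HIO) loop with a real-space support constraint (2D, random initial phase; the feedback parameter beta and a real-valued object are the usual assumptions).

    ```python
    import numpy as np

    def hio(measured_mag, support, iters=500, beta=0.9, seed=0):
        """Hybrid input-output phase retrieval: alternate the Fourier-magnitude
        constraint with a real-space support constraint."""
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0.0, 2.0 * np.pi, measured_mag.shape)
        g = np.fft.ifftn(measured_mag * np.exp(1j * phase)).real
        for _ in range(iters):
            G = np.fft.fftn(g)
            G = measured_mag * np.exp(1j * np.angle(G))  # impose measured modulus
            g_new = np.fft.ifftn(G).real
            # Keep the update inside the support; relax it outside (HIO feedback).
            g = np.where(support, g_new, g - beta * g_new)
        return g
    ```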

  5. FUX-Sim: Implementation of a fast universal simulation/reconstruction framework for X-ray systems.

    PubMed

    Abella, Monica; Serrano, Estefania; Garcia-Blas, Javier; García, Ines; de Molina, Claudia; Carretero, Jesus; Desco, Manuel

    2017-01-01

    The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented. The development of simulation/reconstruction algorithms in this context poses three main difficulties. First, the algorithms deal with large data volumes and are computationally expensive, thus leading to the need for hardware and software optimizations. Second, these optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. And third, the evolution of the various hardware setups increases the effort required for maintaining and adapting the implementations to current and future programming models. Previous works lack support for completely flexible geometries and/or compatibility with multiple programming models and platforms. In this paper, we present FUX-Sim, a novel X-ray simulation/reconstruction framework that was designed to be flexible and fast. Optimized implementation for different families of GPUs (CUDA and OpenCL) and multi-core CPUs was achieved thanks to a modularized approach based on a layered architecture and parallel implementation of the algorithms for both architectures. A detailed performance evaluation demonstrates that for different system configurations and hardware platforms, FUX-Sim maximizes performance with the CUDA programming model (5 times faster than other state-of-the-art implementations). Furthermore, the CPU and OpenCL programming models allow FUX-Sim to be executed over a wide range of hardware platforms.

  6. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  7. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPS were then computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using the local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
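
    The ensemble 2D NPS estimate used in this kind of study is compact enough to state directly. This sketch follows the common definition NPS = (Δx·Δy / NxNy)·⟨|FFT2(ΔI)|²⟩ over repeated same-location ROIs; the exact detrending used in the paper is not specified here and is an assumption.

    ```python
    import numpy as np

    def nps_2d(rois, pixel_size):
        """Ensemble estimate of the local 2D noise power spectrum.

        rois: (N, ny, nx) array of same-location ROIs from repeated scans (HU).
        pixel_size: (dy, dx) in mm. Returns the 2D NPS in HU^2 mm^2.
        """
        n, ny, nx = rois.shape
        noise = rois - rois.mean(axis=0, keepdims=True)   # remove ensemble mean
        dft = np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))
        nps = (np.abs(dft) ** 2).mean(axis=0)             # average periodogram
        return nps * pixel_size[0] * pixel_size[1] / (nx * ny)
    ```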

  8. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed position. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, (a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and (b) enhancing the quantitative performance, particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis of the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
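
    The linear Patlak model that the paper generalizes (the generalized form adds a reversible-uptake term) reduces to a straight-line fit after the standard transformation; a minimal sketch:

    ```python
    import numpy as np

    def patlak_fit(ct, cp, t, t_star_idx):
        """Linear Patlak fit: ct/cp = Ki * (int_0^t cp dt')/cp + V, fitted over
        frames from t_star_idx onward (after equilibration; cp > 0 assumed).

        ct: tissue TAC, cp: plasma input function, t: frame mid-times.
        """
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))  # trapezoid
        x = (integral / cp)[t_star_idx:]    # "Patlak time"
        y = (ct / cp)[t_star_idx:]
        Ki, V = np.polyfit(x, y, 1)         # slope = influx rate, intercept = V
        return Ki, V
    ```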

  9. Optimization of the spatial resolution for the GE discovery PET/CT 710 by using NEMA NU 2-2007 standards

    NASA Astrophysics Data System (ADS)

    Yoon, Hyun Jin; Jeong, Young Jin; Son, Hye Joo; Kang, Do-Young; Hyun, Kyung-Yae; Lee, Min-Kyung

    2015-01-01

    The spatial resolution in positron emission tomography (PET) is fundamentally limited by the geometry of the detector element, the range the positron travels before annihilating with an electron, the acollinearity of the annihilation photons, the crystal decoding error, the penetration into the detector ring, and the reconstruction algorithms. In this paper, optimized parameters are suggested to produce high-resolution PET images using an iterative reconstruction algorithm. A phantom with three point sources structured with three capillary tubes was prepared with an axial extension of less than 1 mm and was filled with 18F-fluorodeoxyglucose (18F-FDG) at concentrations above 200 MBq/cc. The performance measures of all the PET images were acquired according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard procedures. The parameters for the iterative reconstruction were adjusted around the values recommended by General Electric (GE), and the optimized values of the spatial resolution and the full width at half maximum (FWHM) or the full width at tenth maximum (FWTM) were found for the best PET resolution. The axial and the transverse spatial resolutions for filtered back-projection (FBP) at 1 cm off-axis were 4.81 and 4.48 mm, respectively. The axial and the trans-axial spatial resolutions at 10 cm off-axis were 5.63 mm and 5.08 mm, respectively, where the trans-axial resolution at 10 cm was evaluated as the average of the radial and the tangential measurements. The recommended optimized parameters of the spatial resolution according to the NEMA phantom for the number of subsets, the number of iterations, and the Gaussian post-filter are 12, 3, and 3 mm for the iterative reconstruction VUE Point HD without the SharpIR algorithm (HD), and 12, 12, and 5.2 mm with SharpIR (HD.S), respectively, using the Advantage Workstation Volume Share 5 (AW4.6). The performance measurements for the GE Discovery PET/CT 710 using the NEMA NU 2-2007 standards from our results will be helpful in the quantitative analysis of PET scanner images. The spatial resolution improved more with an advanced algorithm such as HD.S than with HD or FBP. The use of the optimized parameters for iterative reconstruction is strongly recommended for high-quality images from the GE Discovery PET/CT 710 scanner.
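
    The resolution numbers above come from FWHM measurements of point-source profiles. A minimal sketch of the half-maximum crossing by linear interpolation follows (NEMA NU 2 additionally uses a parabolic fit through the peak, omitted here; a single well-sampled interior peak is assumed).

    ```python
    import numpy as np

    def fwhm(profile, pixel_mm):
        """FWHM by linear interpolation of the half-maximum crossings; assumes
        one peak with both edges falling inside the profile."""
        half = profile.max() / 2.0
        idx = np.where(profile >= half)[0]
        l, r = idx[0], idx[-1]
        # Interpolate the rising and falling half-maximum crossings.
        left = l - 1 + (half - profile[l - 1]) / (profile[l] - profile[l - 1])
        right = r + (profile[r] - half) / (profile[r] - profile[r + 1])
        return (right - left) * pixel_mm
    ```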

  10. Probabilistic surface reconstruction from multiple data sets: An example for the Australian Moho

    NASA Astrophysics Data System (ADS)

    Bodin, T.; Salmon, M.; Kennett, B. L. N.; Sambridge, M.

    2012-10-01

    Interpolation of spatial data is a widely used technique across the Earth sciences. For example, the thickness of the crust can be estimated by different active and passive seismic source surveys, and seismologists reconstruct the topography of the Moho by interpolating these different estimates. Although much research has been done on improving the quantity and quality of observations, the interpolation algorithms utilized often remain standard linear regression schemes, with three main weaknesses: (1) the level of structure in the surface, or smoothness, has to be predefined by the user; (2) different classes of measurements with varying and often poorly constrained uncertainties are used together, and hence it is difficult to give appropriate weight to different data types with standard algorithms; (3) there is typically no simple way to propagate uncertainties in the data to uncertainty in the estimated surface. Hence the situation can be expressed by Mackenzie (2004): "We use fantastic telescopes, the best physical models, and the best computers. The weak link in this chain is interpreting our data using 100 year old mathematics". Here we use recent developments made in Bayesian statistics and apply them to the problem of surface reconstruction. We show how the reversible jump Markov chain Monte Carlo (rj-McMC) algorithm can be used to let the degree of structure in the surface be directly determined by the data. The solution is described in probabilistic terms, allowing uncertainties to be fully accounted for. The method is illustrated with an application to Moho depth reconstruction in Australia.

  11. A multistage selective weighting method for improved microwave breast tomography.

    PubMed

    Shahzad, Atif; O'Halloran, Martin; Jones, Edward; Glavin, Martin

    2016-12-01

    Microwave tomography has shown potential to successfully reconstruct the dielectric properties of the human breast, thereby providing an alternative to other imaging modalities used in breast imaging applications. Considering the costly forward solution and complex iterative algorithms, computational complexity becomes a major bottleneck in practical applications of microwave tomography. In addition, the natural tendency of microwave inversion algorithms to reward high-contrast breast tissue boundaries, such as the skin-adipose interface, usually leads to a very slow reconstruction of the internal tissue structure of the human breast. This paper presents a multistage selective weighting method to improve the reconstruction quality of breast dielectric properties and minimize the computational cost of microwave breast tomography. In the proposed two-stage approach, the skin layer is approximated using scaled microwave measurements in the first pass of the inversion algorithm; a numerical skin model is then constructed based on the estimated skin layer and the assumed dielectric properties of the skin tissue. In the second stage of the algorithm, the skin model is used as a priori information to reconstruct the internal tissue structure of the breast using a set of temporal scaling functions. The proposed method is evaluated on anatomically accurate MRI-derived breast phantoms and a comparison with the standard single-stage technique is presented. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  12. Technical note: RabbitCT--an open platform for benchmarking 3D cone-beam reconstruction algorithms.

    PubMed

    Rohkohl, C; Keck, B; Hofmann, H G; Hornegger, J

    2009-09-01

    Fast 3D cone beam reconstruction is mandatory for many clinical workflows. For that reason, researchers and industry work hard on hardware-optimized 3D reconstruction. Backprojection is a major component of many reconstruction algorithms; it requires projecting each voxel onto the projection data, including data interpolation, before updating the voxel value. This step is the bottleneck of most reconstruction algorithms and the focus of optimization in recent publications. A crucial limitation of these publications, however, is that the presented results are not comparable to each other. This is mainly due to variations in data acquisitions, preprocessing, and chosen geometries, and the lack of a common publicly available test dataset. The authors provide such a standardized dataset that allows for substantial comparison of hardware-accelerated backprojection methods. They developed an open platform, RabbitCT (www.rabbitCT.com), for worldwide comparison of backprojection performance and ranking on different architectures using a specific high-resolution C-arm CT dataset of a rabbit. This includes a sophisticated benchmark interface, a prototype implementation in C++, and image quality measures. At the time of writing, six backprojection implementations are already listed on the website. Optimizations include multithreading using Intel Threading Building Blocks and OpenMP, vectorization using SSE, and computation on the GPU using CUDA 2.0. There is a need for objectively comparing backprojection implementations for reconstruction algorithms. RabbitCT aims to provide a solution to this problem by offering an open platform with fair chances for all participants. The authors look forward to a growing community and await feedback regarding future evaluations of novel software- and hardware-based acceleration schemes.

  13. Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.

    PubMed

    Bexelius, Tobias; Sohlberg, Antti

    2018-06-01

    Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The differences in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations were minor. On the other hand, at its best, the GPU implementation was 24 times faster than the multi-threaded CPU version on a typical 128 × 128 matrix, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstruction shows great promise as an everyday clinical reconstruction tool.
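
    The OSEM core that the GPU version accelerates is a short loop. This sketch omits the attenuation, collimator-detector response and Monte Carlo scatter models that the paper folds into the system model; the dense system matrix and the subset partition given as inputs are simplifying assumptions.

    ```python
    import numpy as np

    def osem(y, A, subsets, n_iter=4, x0=None):
        """Plain OSEM skeleton.

        y: measured sinogram (flattened), A: system matrix (n_bins, n_voxels),
        subsets: list of index arrays partitioning the sinogram bins.
        """
        x = np.ones(A.shape[1]) if x0 is None else x0.copy()
        for _ in range(n_iter):
            for s in subsets:
                As = A[s]                              # rows of this subset
                ratio = y[s] / np.maximum(As @ x, 1e-12)   # measured / estimated
                sens = np.maximum(As.T @ np.ones(len(s)), 1e-12)  # sensitivity
                x *= (As.T @ ratio) / sens             # multiplicative EM update
        return x
    ```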

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levakhina, Y. M.; Mueller, J.; Buzug, T. M.

    Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited-angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets were acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains the artifact-causing features. The artifact distribution in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate reduced out-of-focus artifacts, lower STD (meaning a reduction of artifacts), and narrower ASF compared to nonweighted SART reconstruction. This is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone. Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical tomosynthesis artifacts produced by high-attenuation features. The proposed algorithm assigns weighting coefficients automatically, and no segmentation or tissue-classification steps are required. The algorithm can be included in various iterative reconstruction algorithms with an additive updating strategy. It can also be extended to the computed tomography case with a complete set of angular data.
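
    A sketch of how a per-(view, voxel) weight could enter an additive SART update, under the assumption that the weights have already been computed from the dissimilarity measure; the 4D backprojected-space construction itself is not reproduced here.

    ```python
    import numpy as np

    def weighted_sart_update(x, A, b, view_rows, weights, relax=0.5):
        """One SART sweep with per-(view, voxel) weighting of the backprojected
        updating term.

        A: system matrix, b: projections, view_rows: list of row-index arrays,
        one per angular view, weights: (n_views, n_voxels) array (given).
        """
        for v, rows in enumerate(view_rows):       # loop over angular views
            Av = A[rows]
            resid = b[rows] - Av @ x               # ray residuals for this view
            row_sum = np.maximum(Av.sum(axis=1), 1e-12)
            update = Av.T @ (resid / row_sum)      # standard SART update term
            col_sum = np.maximum(Av.sum(axis=0), 1e-12)
            # Nonlinear weighting: down-weight artifact-producing views per voxel.
            x = x + relax * weights[v] * update / col_sum
        return x
    ```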

  15. Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.

    PubMed

    Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan

    2018-03-01

    Joint activity and attenuation reconstruction methods from time-of-flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set are processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows differences of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, averaged over all regions of interest, respectively. Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  16. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference, referred to as the lp(2d) pseudo-norm, of the signal. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian-learning-based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
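
    The regularizer itself is one line; this sketch uses the usual smoothed form so the pseudo-norm is differentiable at zero (epsilon and p = 0.5 are assumptions, not the paper's settings).

    ```python
    import numpy as np

    def lp2d_pseudonorm(x, p=0.5, eps=1e-8):
        """Smoothed lp pseudo-norm of the second-order difference of a signal:
        sum_i (d_i^2 + eps)^(p/2), where d is the 2nd-order finite difference.
        In the reconstruction this is minimized subject to data consistency."""
        d2 = np.diff(x, n=2)
        return np.sum((d2 ** 2 + eps) ** (p / 2.0))
    ```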

  17. Efficacy and Clinical Utility of a High-Attenuation Object Artifact Reduction Algorithm in Flat-Detector Image Reconstruction Compared With Standard Image Reconstruction.

    PubMed

    Naehle, Claas P; Hechelhammer, Lukas; Richter, Heiko; Ryffel, Fabian; Wildermuth, Simon; Weber, Johannes

    To evaluate the effectiveness and clinical utility of a metal artifact reduction (MAR) image reconstruction algorithm for the reduction of high-attenuation object (HAO)-related image artifacts. Images were evaluated quantitatively for image noise (noiseSD and noiserange) and qualitatively for artifact severity, gray-white matter delineation, and diagnostic confidence, with conventional reconstruction and after applying a MAR algorithm. Metal artifact reduction reduces noiseSD and noiserange (median [interquartile range]) at the level of the HAO at 1-cm distance compared with conventional reconstruction (noiseSD: 60.0 [71.4] vs 12.8 [16.1] and noiserange: 262.0 [236.8] vs 72.0 [28.3]; P < 0.0001). Artifact severity (reader 1 [mean ± SD]: 1.1 ± 0.6 vs 2.4 ± 0.5; reader 2: 0.8 ± 0.6 vs 2.0 ± 0.4) at the level of the HAO and diagnostic confidence (reader 1: 1.6 ± 0.7 vs 2.6 ± 0.5; reader 2: 1.0 ± 0.6 vs 2.3 ± 0.7) improved significantly with MAR (P < 0.0001). Metal artifact reduction did not affect gray-white matter delineation. Metal artifact reduction effectively reduces image artifacts caused by HAO and significantly improves diagnostic confidence without worsening gray-white matter delineation.

  18. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 s were obtained using a standard desktop computer without numerical optimization.

  19. Accelerated Compressed Sensing Based CT Image Reconstruction

    PubMed Central

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 s were obtained using a standard desktop computer without numerical optimization. PMID:26167200

  20. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and to compare its performance with the conventional indirect approach. Methods: Time activity curves of an NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922

  1. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and to compare its performance with the conventional indirect approach. Time activity curves of an NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.

  2. Regridding reconstruction algorithm for real-time tomographic imaging

    PubMed Central

    Marone, F.; Stampanoni, M.

    2012-01-01

    Sub-second temporal-resolution tomographic microscopy is becoming a reality at third-generation synchrotron sources. Efficient data handling and post-processing are, however, difficult when the data rates are close to 10 GB s⁻¹. This bottleneck still hinders exploitation of the full potential inherent in the ultrafast acquisition speed. In this paper the fast reconstruction algorithm gridrec, highly optimized for conventional CPU technology, is presented. It is shown that gridrec is a valuable alternative to standard filtered back-projection routines, despite being based on the Fourier transform method. In fact, the regridding procedure used for resampling the Fourier space from polar to Cartesian coordinates couples excellent performance with negligible accuracy degradation. The stronger dependence of the observed signal-to-noise ratio for gridrec reconstructions on the number of angular views makes the presented algorithm even superior to filtered back-projection when the tomographic problem is well sampled. Gridrec not only guarantees high-quality results but also provides up to a 20-fold performance increase, making real-time monitoring of the sub-second acquisition process a reality. PMID:23093766
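
    The regridding idea can be illustrated with generic interpolation in Fourier space: by the Fourier slice theorem, the 1D FFT of each projection samples the 2D Fourier transform along a diameter, so resampling those polar samples onto a Cartesian grid and inverse-transforming yields the image. Gridrec's convolution-based gridding kernel and optimizations are replaced here by scipy griddata, so this is a bare-bones concept sketch, not gridrec itself (angles in radians over [0, π) are assumed).

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    def fourier_regrid_reconstruct(sinogram, angles):
        """Fourier-method reconstruction via polar-to-Cartesian regridding.

        sinogram: (n_angles, n_det) parallel-beam projections.
        angles  : projection angles in radians, covering [0, pi).
        """
        n_ang, n_det = sinogram.shape
        freqs = np.fft.fftshift(np.fft.fftfreq(n_det))
        proj_ft = np.fft.fftshift(
            np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
        # Polar sample locations in the 2D Fourier plane.
        th, fr = np.meshgrid(angles, freqs, indexing="ij")
        pts = np.column_stack([(fr * np.cos(th)).ravel(),
                               (fr * np.sin(th)).ravel()])
        gx, gy = np.meshgrid(freqs, freqs, indexing="xy")
        cart = griddata(pts, proj_ft.ravel(), (gx, gy),
                        method="linear", fill_value=0)
        img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(cart)))
        return img.real
    ```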

  3. Noise and signal properties in PSF-based fully 3D PET image reconstruction: an experimental evaluation

    NASA Astrophysics Data System (ADS)

    Tong, S.; Alessio, A. M.; Kinahan, P. E.

    2010-03-01

    The addition of accurate system modeling in PET image reconstruction results in images with distinct noise texture and characteristics. In particular, the incorporation of point spread functions (PSFs) into the system model has been shown to visually reduce image noise, but the noise properties have not been thoroughly studied. This work offers a systematic evaluation of noise and signal properties in different combinations of reconstruction methods and parameters. We evaluate two fully 3D PET reconstruction algorithms: (1) OSEM with the exact scanner line of response modeled (OSEM+LOR), and (2) OSEM with the line of response and a measured point spread function incorporated (OSEM+LOR+PSF), in combination with the effects of four post-reconstruction filtering parameters and 1-10 iterations, representing a range of clinically acceptable settings. We used a modified NEMA image quality (IQ) phantom, which was filled with ⁶⁸Ge and consisted of six hot spheres of different sizes with a target/background ratio of 4:1. The phantom was scanned 50 times in 3D mode on a clinical system to provide independent noise realizations. Data were reconstructed with OSEM+LOR and OSEM+LOR+PSF using different reconstruction parameters, and our implementations of the algorithms match the vendor's product algorithms. With access to multiple realizations, background noise characteristics were quantified with four metrics. Image roughness and the standard deviation image measured the pixel-to-pixel variation; background variability and ensemble noise quantified the region-to-region variation. Image roughness is the image noise perceived when viewing an individual image. At matched iterations, the addition of PSF leads to images with less noise defined as image roughness (reduced by 35% for unfiltered data) and as the standard deviation image, while it has no effect on background variability or ensemble noise. In terms of signal-to-noise performance, PSF-based reconstruction has a 7% improvement in contrast recovery at matched ensemble noise levels and a 20% improvement in quantitation SNR in unfiltered data. In addition, the relationships between different metrics are studied. A linear correlation is observed between background variability and ensemble noise for all combinations of reconstruction methods and parameters, suggesting that background variability is a reasonable surrogate for ensemble noise when multiple realizations of scans are not available.
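
    The four background-noise metrics are straightforward to compute once repeated realizations are available; the definitions in this sketch are the common ones and are stated as assumptions, not necessarily the paper's exact formulas.

    ```python
    import numpy as np

    def noise_metrics(realizations, bg_mask, sub_rois):
        """Background-noise metrics from repeated scans of the same object.

        realizations: (N, ny, nx) reconstructions, bg_mask: boolean mask of a
        uniform background region, sub_rois: list of smaller masks inside it.
        """
        # Pixel-to-pixel variation:
        image_roughness = np.mean([img[bg_mask].std() for img in realizations])
        sd_image = realizations.std(axis=0)        # per-pixel std over scans
        # Region-to-region variation:
        means = np.array([[img[m].mean() for m in sub_rois]
                          for img in realizations])
        background_variability = means.std(axis=1).mean()  # across ROIs, per scan
        ensemble_noise = means.std(axis=0).mean()          # across scans, per ROI
        return image_roughness, sd_image, background_variability, ensemble_noise
    ```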

  4. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a stepwise increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms.
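
    Of the five indices, the global inhomogeneity (GI) index is easily stated; this sketch follows the usual definition (sum of absolute deviations from the median lung pixel value, normalized by the total tidal impedance change in the lungs).

    ```python
    import numpy as np

    def global_inhomogeneity_index(tidal_image, lung_mask):
        """GI index of an EIT tidal image: lower values mean more homogeneous
        ventilation within the identified lung region."""
        lung = tidal_image[lung_mask]
        return np.abs(lung - np.median(lung)).sum() / lung.sum()
    ```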

  5. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post-reconstruction filter to remove the out-of-focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  6. Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography

    PubMed Central

    Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A

    2012-01-01

    Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, can improve image quality over analytic algorithms because they can incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062

  7. Fast direct fourier reconstruction of radial and PROPELLER MRI data using the chirp transform algorithm on graphics hardware.

    PubMed

    Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan

    2013-10-01

    To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for routine clinical applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories that are composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT was evaluated using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementation of the Chirp transform algorithm-DrFT on the graphics processing unit enables efficient calculation of the DrFT reconstruction of radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
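
    The chirp transform evaluates a DFT on equally spaced output points via three FFTs, using Bluestein's identity nk = (n² + k² − (k−n)²)/2. The sketch below is a generic textbook version of that building block, not the authors' GPU code.

    ```python
    import numpy as np

    def chirp_dft(x, m, dphi):
        """Evaluate X_k = sum_n x[n] * exp(-1j * dphi * n * k) for k = 0..m-1,
        i.e. a DFT on m equally spaced frequency points with spacing dphi,
        via Bluestein's chirp algorithm (fast circular convolution)."""
        n = len(x)
        k = np.arange(max(m, n))
        chirp = np.exp(-1j * dphi * k ** 2 / 2.0)         # w^{k^2/2}
        nfft = 1 << int(np.ceil(np.log2(n + m - 1)))      # linear-conv length
        xa = np.zeros(nfft, dtype=complex)
        xa[:n] = x * chirp[:n]
        h = np.zeros(nfft, dtype=complex)
        h[:m] = np.conj(chirp[:m])                        # w^{-k^2/2}, k >= 0
        h[nfft - n + 1:] = np.conj(chirp[1:n])[::-1]      # negative lags wrap
        y = np.fft.ifft(np.fft.fft(xa) * np.fft.fft(h))
        return y[:m] * chirp[:m]

    # e.g. m output samples with frequency spacing df from unit-spaced data:
    # X = chirp_dft(samples, m, 2 * np.pi * df)
    ```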

  8. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
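
    A small sketch of the two ingredients named above, under assumed forms: a row-normalized Gaussian kernel matrix built from MR-derived feature vectors (in the spirit of the kernel method), and spectral-analysis temporal basis functions obtained by convolving an input function with a grid of decaying exponentials. Neighbor counts, widths, and decay rates are placeholders.

    ```python
    import numpy as np

    def kernel_matrix(mr_features, n_neighbors=8, sigma=1.0):
        """Kernel spatial basis: each PET voxel is expressed as a weighted
        sum of its nearest neighbors in MR feature space (image = K @ alpha).
        mr_features: (n_voxels, n_features) features from the T1 image."""
        n = mr_features.shape[0]
        K = np.zeros((n, n))
        for i in range(n):
            d2 = np.sum((mr_features - mr_features[i])**2, axis=1)
            nbr = np.argsort(d2)[:n_neighbors]
            w = np.exp(-d2[nbr] / (2 * sigma**2))
            K[i, nbr] = w / w.sum()                 # row-normalized weights
        return K

    def spectral_basis(t, input_fn, betas):
        """Spectral-analysis temporal basis: the input function convolved
        with exponentials exp(-beta*t) over a grid of decay rates."""
        dt = t[1] - t[0]
        return np.stack([dt * np.convolve(input_fn, np.exp(-b * t))[:len(t)]
                         for b in betas], axis=1)

    # The dynamic image is then modelled as K @ A @ B.T, with the
    # coefficients A estimated from the sinograms by an EM algorithm.
    rng = np.random.default_rng(0)
    K = kernel_matrix(rng.random((200, 9)))         # 200 voxels, 3x3 patches
    t = np.linspace(0.0, 5.0, 16)                   # frame times (min)
    B = spectral_basis(t, t * np.exp(-1.5 * t), np.logspace(-2, 1, 12))
    ```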

  9. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [(11)C]SCH23390 data, showing promising results.

  10. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebral structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views, and therefore the data acquisition, in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps comparable in quality and accuracy to those of conventional two-full-scan DECT.
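
    A toy sketch of the similarity weighting and structure-preserving penalty described in the Methods, with an assumed Gaussian-type weight bandwidth and a dense weight matrix for clarity; a practical implementation would restrict each pixel's sum to a search window.

    ```python
    import numpy as np

    def similarity_weights(full_scan, h=30.0, eps=1e-12):
        """Pixel-pair similarity from the full-scan FBP image, computed as
        an exponential of pixel-value differences (h is an assumed
        bandwidth), then row-normalized."""
        p = full_scan.ravel().astype(float)
        w = np.exp(-(p[:, None] - p[None, :])**2 / h**2)
        np.fill_diagonal(w, 0.0)
        return w / (w.sum(axis=1, keepdims=True) + eps)

    def spir_penalty(x, w):
        """L2 norm of the difference between each pixel and its
        similarity-weighted estimate from the other pixels; this is the
        structure-preserving term minimized alongside data fidelity."""
        x = x.ravel().astype(float)
        return np.sum((x - w @ x)**2)

    img = np.random.default_rng(3).random((16, 16))
    w = similarity_weights(img)
    print(spir_penalty(img, w))   # small, since weights come from img itself
    ```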

  11. Simulation results for a finite element-based cumulative reconstructor

    NASA Astrophysics Data System (ADS)

    Wagner, Roland; Neubauer, Andreas; Ramlau, Ronny

    2017-10-01

    Modern ground-based telescopes rely on adaptive optics (AO) systems for the compensation of image degradation caused by atmospheric turbulence. Within an AO system, measurements of incoming light from guide stars are used to adjust deformable mirror(s) in real time to correct for atmospheric distortions. The incoming wavefront has to be derived from sensor measurements, and this intermediate result is then translated into the shape(s) of the deformable mirror(s). Rapid changes of the atmosphere lead to the need for fast wavefront reconstruction algorithms. We review a fast matrix-free algorithm that was developed by Neubauer to reconstruct the incoming wavefront from Shack-Hartmann measurements based on a finite element discretization of the telescope aperture. The method is enhanced by a domain decomposition ansatz. We show that this algorithm reaches the quality of standard approaches in end-to-end simulation while maintaining the speed of recently introduced solvers of linear complexity.

  12. An electron tomography algorithm for reconstructing 3D morphology using surface tangents of projected scattering interfaces

    NASA Astrophysics Data System (ADS)

    Petersen, T. C.; Ringer, S. P.

    2010-03-01

    Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized aluminium tip is also presented to demonstrate practical analysis for a real specimen.
    Program summary
    Program title: STOMO version 1.0
    Catalogue identifier: AEFS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2988
    No. of bytes in distributed program, including test data, etc.: 191 605
    Distribution format: tar.gz
    Programming language: C/C++
    Computer: PC
    Operating system: Windows XP
    RAM: Depends upon the size of experimental data as input, ranging from 200 MB to 1.5 GB
    Supplementary material: Sample output files, for the test run provided, are available.
    Classification: 7.4, 14
    External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
    Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range. The algorithm does not solve the tomographic back-projection problem but rather reconstructs the local 3D morphology of surfaces defined by varied scattering densities.
    Solution method: Reconstruction using differential geometry applied to image analysis computations.
    Restrictions: The code has only been tested with square images and has been developed for only single-axis tilting.
    Running time: For high quality reconstruction, 5-15 min

  13. Influence of Co-57 and CT Transmission Measurements on the Quantification Accuracy and Partial Volume Effect of a Small Animal PET Scanner.

    PubMed

    Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J

    2017-12-01

    Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), and determines their influences on the quantification accuracy and partial volume effect (PVE). A special focus was the impact of the performed calibration on the quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of measured from true activity of up to 24.06%), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations of measured from true activity of <3%). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE demonstrated a dependency on the sphere size, the position within the field of view, the reconstruction and correction algorithms, and the count statistics. Depending on the reconstruction algorithm, only ~30-40% of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms showed substantially increased recovery values compared to the analytical and 2D iterative reconstruction algorithms (up to 70.46% and 80.82% recovery for the smallest and largest sphere, respectively, using iterative 3D reconstruction algorithms). The transmission measurement (CT or Co-57 source) used to correct for attenuation did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm and the applied corrections. In particular, the influence of the emission activity during a transmission measurement performed with a Co-57 source must be considered. To obtain comparable results, including across different scanner configurations, standardization of the acquisition (imaging parameters, as well as applied reconstruction and correction protocols) is necessary.

  14. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Justin, E-mail: justin.solomon@duke.edu; Samei, Ehsan

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were acquired for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms, with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise magnitude was reduced by an average of 60% in SAFIRE images compared to FBP. However, for edge pixels, noise magnitude ranged from 20% higher to 40% lower in SAFIRE images compared to FBP. SAFIRE images of the lung phantom exhibited distinct regions with varying noise texture (i.e., noise autocorrelation/power spectra). Conclusions: Quantum noise properties observed in uniform phantoms may not be representative of those in actual patients for nonlinear reconstruction algorithms. Anatomical texture should be considered when evaluating the performance of CT systems that use such nonlinear algorithms.
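
    For the repeated-acquisition noise analysis described above, a common estimator is sketched below: subtract the ensemble mean image to isolate the noise, then average the squared FFT magnitude over central ROIs. The ROI size and normalization are simplified assumptions rather than the study's exact procedure.

    ```python
    import numpy as np

    def noise_power_spectrum(repeats, roi=64):
        """Estimate a local 2D noise power spectrum from repeated scans of
        the same phantom. repeats: (n_scans, H, W) co-registered images."""
        noise = repeats - repeats.mean(axis=0)     # remove deterministic signal
        h, w = noise.shape[1:]
        y0, x0 = (h - roi) // 2, (w - roi) // 2
        rois = noise[:, y0:y0 + roi, x0:x0 + roi]
        spectra = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2)))**2
        return spectra.mean(axis=0) / roi**2       # per-pixel normalization

    rng = np.random.default_rng(4)
    repeats = rng.normal(size=(50, 128, 128))      # stand-in for 50 scans
    nps = noise_power_spectrum(repeats)
    # Noise-magnitude map for the stationarity assessment: per-pixel
    # standard deviation across the repeated scans.
    sigma_map = repeats.std(axis=0)
    ```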

  15. An Improved DINEOF Algorithm for Filling Missing Values in Spatio-Temporal Sea Surface Temperature Data.

    PubMed

    Ping, Bo; Su, Fenzhen; Meng, Yunshan

    2016-01-01

    In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the improved algorithm does not need to iterate the reconstruction procedure to convergence for every fixed EOF in order to determine the optimal EOF mode; the convergence criterion is reached only once. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on the optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) dataset is reconstructed by using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
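
    A compact sketch of the EOF-based gap filling that DINEOF-type algorithms build on: alternate a truncated SVD reconstruction with re-insertion of the observed values until the fill stabilizes. The ordinary algorithm additionally cross-validates the EOF count, and the VE variant lets the optimal EOFs vary between intermediate reconstructions; the fixed n_eof below is a simplifying assumption.

    ```python
    import numpy as np

    def dineof_fill(X, mask, n_eof=5, tol=1e-6, max_iter=500):
        """EOF-based gap filling: initialize missing entries with the mean
        of the observed data, then iterate truncated-SVD reconstruction,
        updating only the gaps. X: (space, time); mask: True where missing."""
        Xf = X.copy()
        Xf[mask] = np.nanmean(X[~mask])            # crude initialization
        prev = np.inf
        for _ in range(max_iter):
            U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
            recon = (U[:, :n_eof] * s[:n_eof]) @ Vt[:n_eof]
            change = np.sqrt(np.mean((recon[mask] - Xf[mask])**2))
            Xf[mask] = recon[mask]                 # update only the gaps
            if abs(prev - change) < tol:
                break
            prev = change
        return Xf

    # Toy usage: a rank-1 "SST" field with 20% of the entries missing.
    rng = np.random.default_rng(5)
    X = np.outer(np.sin(np.linspace(0, np.pi, 40)), np.sin(np.linspace(0, 12, 60)))
    X += 0.02 * rng.normal(size=X.shape)
    mask = rng.random(X.shape) < 0.2
    filled = dineof_fill(X, mask, n_eof=2)
    ```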

  16. Markov prior-based block-matching algorithm for superdimension reconstruction of porous media

    NASA Astrophysics Data System (ADS)

    Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong

    2018-04-01

    A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not really reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regularization terms are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and the traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.

  17. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  18. Algorithmic problems of nontransitive (SSB) utilities

    NASA Technical Reports Server (NTRS)

    Kosheleva, O. M.; Kreinovich, V. YA.

    1991-01-01

    The standard utility theory is based on several natural axioms including transitivity of preference; however, real preference is often not transitive. To describe such preferences, Fishburn (1988) introduced a new formalism (SSB-utilities), in which preference is described by a skew-symmetric function F: M × M → R, where M is the set of all alternatives. He also showed that it is in principle possible to reconstruct this function F by asking the person to compare different alternatives and lotteries. In the present paper we propose a new algorithm for reconstructing F that is asymptotically optimal in the sense that the number of binary (yes-no) questions that one has to ask to determine the values of F with given precision is of minimal possible order.
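
    The logarithmic question count can be made concrete with a bisection sketch: each yes/no comparison halves the interval containing a value F(a, b), so roughly log2((hi - lo)/eps) questions suffice for precision eps. The question oracle below is a stand-in for the lottery comparisons described in the record.

    ```python
    def reconstruct_value(ask_greater_than, lo=-1.0, hi=1.0, eps=1e-3):
        """Determine one value F(a, b) to within eps using binary (yes/no)
        questions. ask_greater_than(m) stands for a preference question
        (e.g., a lottery comparison) answering whether F(a, b) > m."""
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if ask_greater_than(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Simulated respondent whose true value is 0.37: about
    # log2(2/1e-3) ~ 11 questions are needed.
    answers = lambda m: 0.37 > m
    print(reconstruct_value(answers))
    ```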

  19. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    PubMed Central

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-01-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
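
    A minimal sketch of the image-quality measure used here: mutual information between two co-registered images, estimated from their joint intensity histogram. The bin count and the toy images are arbitrary choices.

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Mutual information between two co-registered images (e.g., an
        EIT reconstruction and a lung-segmented MR image) from the joint
        intensity histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                               # avoid log(0)
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    rng = np.random.default_rng(6)
    a = rng.random((64, 64))
    b = a + 0.1 * rng.random((64, 64))             # correlated image
    print(mutual_information(a, b), mutual_information(a, rng.random((64, 64))))
    ```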

  20. Scout-view Assisted Interior Micro-CT

    PubMed Central

    Sen Sharma, Kriti; Holzner, Christian; Vasilescu, Dragoş M.; Jin, Xin; Narayanan, Shree; Agah, Masoud; Hoffman, Eric A.; Yu, Hengyong; Wang, Ge

    2013-01-01

    Micro computed tomography (micro-CT) is a widely-used imaging technique. A challenge of micro-CT is to quantitatively reconstruct a sample larger than the field-of-view (FOV) of the detector. This scenario is characterized by truncated projections and associated image artifacts. However, for such truncated scans, a low resolution scout scan with an increased FOV is frequently acquired so as to position the sample properly. This study shows that the otherwise discarded scout scans can provide sufficient additional information to uniquely and stably reconstruct the interior region of interest. Two interior reconstruction methods are designed to utilize the multi-resolution data without a significant computational overhead. While most previous studies used numerically truncated global projections as interior data, this study uses truly hybrid scans where global and interior scans were carried out at different resolutions. Additionally, owing to the lack of standard interior micro-CT phantoms, we designed and fabricated novel interior micro-CT phantoms for this study to provide means of validation for our algorithms. Finally, two characteristic samples from separate studies were scanned to show the effect of our reconstructions. The presented methods show significant improvements over existing reconstruction algorithms. PMID:23732478

  1. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT.

    PubMed

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-05-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction.

  2. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc⁻¹. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc⁻¹, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
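
    One building block of such a scheme, sketched under standard assumptions: a Zeldovich-type displacement field obtained by solving the Poisson equation for the smoothed density contrast in Fourier space. Grid size, smoothing scale, and sign conventions follow common practice rather than the paper's exact pipeline.

    ```python
    import numpy as np

    def displacement_field(delta, boxsize, smooth):
        """Estimate the displacement psi = -grad(phi), where
        grad^2 phi = delta (with Gaussian smoothing applied in Fourier
        space). At linear order delta ~ -div(psi), so moving objects by
        -psi undoes the estimated flow; iterating with a shrinking
        `smooth` gives the progressively sharper steps described above."""
        n = delta.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                          # avoid divide-by-zero
        dk = np.fft.fftn(delta) * np.exp(-0.5 * smooth**2 * k2)
        phik = -dk / k2                            # grad^2 phi = delta
        phik[0, 0, 0] = 0.0
        return np.stack([np.fft.ifftn(-1j * ki * phik).real
                         for ki in (kx, ky, kz)])  # psi = -grad(phi)

    # One pass of the iterative scheme; the linear initial density is then
    # estimated from the divergence of the cumulative displacement.
    rng = np.random.default_rng(0)
    delta = rng.normal(size=(32, 32, 32))
    psi = displacement_field(delta, boxsize=100.0, smooth=10.0)
    ```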

  3. Metal artifact reduction software used with abdominopelvic dual-energy CT of patients with metal hip prostheses: assessment of image quality and clinical feasibility.

    PubMed

    Han, Seung Chol; Chung, Yong Eun; Lee, Young Han; Park, Kwan Kyu; Kim, Myeong Jin; Kim, Ki Whang

    2014-10-01

    The objective of our study was to determine the feasibility of using Metal Artifact Reduction (MAR) software for abdominopelvic dual-energy CT in patients with metal hip prostheses. This retrospective study included 33 patients (male-female ratio, 19:14; mean age, 63.7 years) who received total hip replacements and 20 patients who did not have metal prostheses as the control group. All of the patients underwent dual-energy CT. The quality of the images reconstructed using the MAR algorithm and of those reconstructed using the standard reconstruction was evaluated in terms of the visibility of the bladder wall, pelvic sidewall, rectal shelf, and bone-prosthesis interface and the overall diagnostic image quality with a 4-point scale. The mean and SD attenuation values in Hounsfield units were measured in the bladder, pelvic sidewall, and rectal shelf. For validation of the MAR interpolation algorithm, pelvis phantoms with small bladder "lesions" and metal hip prostheses were made, and images of the phantoms both with and without MAR reconstruction were evaluated. Image quality was significantly better with MAR reconstruction than without at all sites except the rectal shelf, where the image quality either had not changed or had worsened after MAR reconstruction. The mean attenuation value was changed after MAR reconstruction to its original expected value at the pelvic sidewall (p < 0.001) and inside the bladder (p < 0.001). The SD attenuation value was significantly decreased after MAR reconstruction at the pelvic sidewall (p = 0.019) but did not show significant differences at the bladder (p = 0.173) or rectal shelf (p = 0.478). In the phantom study, all lesions obscured by metal artifacts on the standard reconstruction images were visualized after MAR reconstruction; however, new artifacts had developed in other parts of the MAR reconstruction images. The use of MAR software with dual-energy CT decreases metal artifacts and increases diagnostic confidence in the assessment of the pelvic cavity but also introduces new artifacts that can obscure pelvic structures.

  4. CT coronary angiography: impact of adapted statistical iterative reconstruction (ASIR) on coronary stenosis and plaque composition analysis.

    PubMed

    Fuchs, Tobias A; Fiechter, Michael; Gebhard, Cathérine; Stehli, Julia; Ghadri, Jelena R; Kazakauskaite, Egle; Herzog, Bernhard A; Husmann, Lars; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-03-01

    To assess the impact of adaptive statistical iterative reconstruction (ASIR) on coronary plaque volume and composition analysis as well as on stenosis quantification in high definition coronary computed tomography angiography (CCTA). We included 50 plaques in 29 consecutive patients who were referred for the assessment of known or suspected coronary artery disease (CAD) with contrast-enhanced CCTA on a 64-slice high definition CT scanner (Discovery HD 750, GE Healthcare). CCTA scans were reconstructed with standard filtered back projection (FBP) with no ASIR (0 %) or with increasing contributions of ASIR, i.e. 20, 40, 60, 80 and 100 % (no FBP). Plaque analysis (volume, components and stenosis degree) was performed using a previously validated automated software. Mean values for minimal diameter and minimal area as well as degree of stenosis did not change significantly using different ASIR reconstructions. There was virtually no impact of reconstruction algorithms on mean plaque volume or plaque composition (e.g. soft, intermediate and calcified component). However, with increasing ASIR contribution, the percentage of plaque volume component between 401 and 500 HU decreased significantly (p < 0.05). Modern image reconstruction algorithms such as ASIR, which has been developed for noise reduction in latest high resolution CCTA scans, can be used reliably without interfering with the plaque analysis and stenosis severity assessment.

  5. Reconstruction of reflectance data using an interpolation technique.

    PubMed

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of the root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as the source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The resultant spectra that have been reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light source, in comparison with those obtained from the standard PCA technique.
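
    A minimal sketch of the LUT idea using SciPy's piecewise-linear scattered-data interpolator, with random stand-in data in place of the Munsell/ColorChecker SG measurements. Queries outside the convex hull of the training chips return NaN, mirroring the gamut limitation noted above.

    ```python
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    # Map colorimetric coordinates (CIEXYZ, which the study found
    # preferable to CIELAB) to multi-band reflectance spectra by
    # piecewise-linear interpolation over the training chips.
    rng = np.random.default_rng(7)
    xyz_train = rng.random((200, 3))               # colorimetric source space
    refl_train = rng.random((200, 31))             # spectra, 400-700 nm @ 10 nm
    lut = LinearNDInterpolator(xyz_train, refl_train)

    xyz_query = rng.random((5, 3)) * 0.8 + 0.1     # keep inside the gamut hull
    refl_rec = lut(xyz_query)                      # NaN outside the convex hull
    ```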

  6. Search for the Standard Model Higgs Boson Decaying to Bottom Quarks in Proton-Proton Collisions at 8 TeV

    NASA Astrophysics Data System (ADS)

    Silkworth, Inga

    A search for the standard model Higgs boson (H) decaying to bottom quarks and produced in association with a Z boson is presented. The search uses 8 TeV center-of-mass energy proton-proton collision data recorded by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.0 inverse femtobarns. The Z boson is reconstructed using two oppositely charged leptons -- either electrons or muons. Two techniques for reconstructing the Higgs candidate are discussed: the standard method using two jets reconstructed with the anti-kt algorithm, and a second technique using jet substructure that was developed for highly boosted massive particles. Upper limits, at the 95% confidence level, on the production cross section times the branching ratio, with respect to the standard model expectations, are derived for a Higgs boson in a mass range of 110-135 GeV. The results from the ZH channel are combined with five other channels, and an excess of events is observed consistent with the standard model Higgs boson, with a local significance of 2.1 standard deviations at 125 GeV.

  7. CT cardiac imaging: evolution from 2D to 3D backprojection

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke

    2004-04-01

    The state-of-the-art multiple detector-row CT, which usually employs fan beam reconstruction algorithms by approximating a cone beam geometry with a fan beam geometry, has been well recognized as an important modality for cardiac imaging. At present, multiple detector-row CT is evolving into volumetric CT, in which cone beam reconstruction algorithms are needed to combat cone beam artifacts caused by the large cone angle. An ECG-gated cardiac cone beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To obtain the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone beam reconstruction algorithm toward the edge of the CT detector. A helical body phantom is used to evaluate the ECG-gated cone beam reconstruction algorithm's z-coverage and capability of suppressing cone beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 x 1.25 (mm) and normalized pitch 0.275 and 0.3, respectively, are used to evaluate the ECG-gated CB reconstruction algorithm's imaging performance. As a reference, the images reconstructed by a fan beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone beam reconstruction algorithm outperforms its fan beam counterpart from the perspective of cone beam artifact suppression and z-coverage, while the temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time and to improve patient comfort and x-ray dose efficiency. Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone beam reconstruction algorithms will provide better image quality for CT cardiac applications.

  8. Image reconstruction through thin scattering media by simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua

    2018-07-01

    An approach for reconstructing the image of an object behind thin scattering media by phase modulation is proposed. The optimized phase mask is achieved by modulating the scattered light using a simulated annealing algorithm. The correlation coefficient is exploited as a fitness function to evaluate the quality of the reconstructed image. The reconstructed images optimized by the simulated annealing algorithm and a genetic algorithm are compared in detail. The experimental results show that our proposed method yields better definition and higher speed than the genetic algorithm.
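
    A generic simulated annealing loop over a segmented phase mask, with the correlation-coefficient fitness abstracted behind a callback; the segment count, phase levels, and cooling schedule are assumptions, and the toy fitness merely stands in for the optical measurement.

    ```python
    import numpy as np

    def anneal_phase_mask(fitness, n_segments=256, levels=8, t0=1.0,
                          cooling=0.995, n_iter=5000, seed=0):
        """Simulated annealing: propose a random phase change on one
        segment; accept improvements always and worse masks with the
        usual Boltzmann probability under a geometric cooling schedule.
        fitness(mask) stands for the measured correlation coefficient
        between the reconstructed and reference images."""
        rng = np.random.default_rng(seed)
        mask = rng.integers(0, levels, n_segments) * 2 * np.pi / levels
        cur = fitness(mask)
        t = t0
        for _ in range(n_iter):
            cand = mask.copy()
            cand[rng.integers(n_segments)] = (rng.integers(0, levels)
                                              * 2 * np.pi / levels)
            f = fitness(cand)
            if f > cur or rng.random() < np.exp((f - cur) / t):
                mask, cur = cand, f
            t *= cooling                           # cooling schedule
        return mask, cur

    # Toy stand-in fitness: correlation with a fixed target mask.
    target = np.random.default_rng(1).random(256) * 2 * np.pi
    corr = lambda m: np.corrcoef(m, target)[0, 1]
    print(anneal_phase_mask(corr)[1])
    ```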

  9. Clinical evaluation of reducing acquisition time on single-photon emission computed tomography image quality using proprietary resolution recovery software.

    PubMed

    Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B

    2013-11-01

    A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for collimator-detector response, resulting in an improvement in reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess improvement in image quality using the software and to evaluate the potential of performing reduced time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification applied was the acquisition of an eight-frame gated data set acquired using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data such that the effect of reduced time acquisitions could be assessed without conferring additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts upon reconstructed image quality as adjudged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols; filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over local current processing protocols (P<0.05). The RR algorithm improved image quality compared with local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half of the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent with this approach.

  10. Reconstructing cone-beam CT with spatially varying qualities for adaptive radiotherapy: a proof-of-principle study.

    PubMed

    Lu, Wenting; Yan, Hao; Gu, Xuejun; Tian, Zhen; Luo, Ouyang; Yang, Liu; Zhou, Linghong; Cervino, Laura; Wang, Jing; Jiang, Steve; Jia, Xun

    2014-10-21

    With the aim of maximally reducing imaging dose while meeting requirements for adaptive radiation therapy (ART), we propose in this paper a new cone beam CT (CBCT) acquisition and reconstruction method that delivers images with a low noise level inside a region of interest (ROI) and a relatively high noise level outside the ROI. The acquired projection images include two groups: densely sampled projections at a low exposure with a large field of view (FOV) and sparsely sampled projections at a high exposure with a small FOV corresponding to the ROI. A new algorithm combining the conventional filtered back-projection algorithm and the tight-frame iterative reconstruction algorithm is also designed to reconstruct the CBCT based on these projection data. We have validated our method on a simulated head-and-neck (HN) patient case, a semi-real experiment conducted on an HN cancer patient under a full-fan scan mode, as well as a Catphan phantom under a half-fan scan mode. Relative root-mean-square errors (RRMSEs) of less than 3% for the entire image and ~1% within the ROI compared to the ground truth have been observed. These numbers demonstrate the ability of our proposed method to reconstruct high-quality images inside the ROI. As for the part outside the ROI, although the images are relatively noisy, they can still provide sufficient information for radiation dose calculations in ART. Dose distributions calculated on our CBCT image and on a standard CBCT image are in agreement, with a mean relative difference of 0.082% inside the ROI and 0.038% outside the ROI. Compared with the standard clinical CBCT scheme, an imaging dose reduction of approximately 3-6 times inside the ROI was achieved, as well as approximately 8 times outside the ROI. Regarding computational efficiency, it takes 1-3 min to reconstruct a CBCT image depending on the number of projections used. These results indicate that the proposed method has the potential for application in ART.

  11. Spectral Retrieval of Latent Heating Profiles from TRMM PR data. Part 3: Moistening Estimates over Tropical Ocean Regions

    NASA Technical Reports Server (NTRS)

    Shige, S.; Takayabu, Y.; Tao, W.-K.

    2007-01-01

    The global hydrological cycle is central to the Earth's climate system, with rainfall and the physics of precipitation formation acting as the key links in the cycle. Two-thirds of global rainfall occurs in the tropics, with the associated latent heating (LH) accounting for three-fourths of the total heat energy available to the Earth's atmosphere. In the last decade, it has been established that standard products of LH from satellite measurements, particularly TRMM measurements, would be a valuable resource for scientific research and applications. Such products would enable new insights and investigations concerning the complexities of convection system life cycles, the diabatic heating controls and feedbacks related to mesosynoptic circulations and their forecasting, the relationship of tropical patterns of LH to the global circulation and climate, and strategies for improving cloud parameterizations in environmental prediction models. However, LH and the water vapor profile or budget (called the apparent moisture sink, or Q2) are closely related. This paper presents the development of an algorithm for retrieving Q2 using the TRMM precipitation radar. Since there is no direct measurement of LH and Q2, the validation of the algorithm usually applies a method called a consistency check. Consistency checking involving Cloud Resolving Model (CRM)-generated LH and Q2 profiles and algorithm-reconstructed profiles is a useful step in evaluating the performance of a given algorithm. In this process, the CRM simulation of a time-dependent precipitation process (a multiple-day time series) is used to obtain the required input parameters for a given algorithm. The algorithm is then used to reconstruct the heating and moisture profiles that the CRM simulation originally produced, and finally both sets of conformal estimates (model and algorithm) are compared with each other. The results indicate that discrepancies between the reconstructed and CRM-simulated profiles for Q2, especially at low levels, are larger than those for latent heat. Larger discrepancies in Q2 at low levels are due to moistening in non-precipitating regions that the algorithm cannot reconstruct. Nevertheless, the algorithm-reconstructed total Q2 profiles are in good agreement with the CRM-simulated ones.

  12. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
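
    A projected-gradient sketch of a non-linear, non-negative Poisson problem of this type, under an assumed exponential forward model (expected counts proportional to the exponential of a path integral). This illustrates the modelling idea rather than the paper's specific KKT-derived iterations.

    ```python
    import numpy as np

    def poisson_pg(A, y, c, step=5e-4, n_iter=5000):
        """Projected gradient for: lam = c * exp(-A x), y ~ Poisson(lam),
        x >= 0. The negative log-likelihood sum(lam - y*log(lam)) has
        gradient A.T @ (y - lam); each step is followed by projection
        onto the non-negative orthant."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            lam = c * np.exp(-A @ x)
            x = np.maximum(0.0, x - step * (A.T @ (y - lam)))
        return x

    # Toy range-integrated geometry: lower-triangular A accumulates
    # extinction along the line of sight.
    n = 60
    A = np.tril(np.ones((n, n))) * 0.1             # path integral per bin
    x_true = np.abs(np.sin(np.linspace(0, 3, n))) * 0.5
    c = 100.0 * np.ones(n)                         # assumed system constant
    y = np.random.default_rng(8).poisson(c * np.exp(-A @ x_true))
    x_rec = poisson_pg(A, y, c)
    ```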

  13. Direct integration of the inverse Radon equation for X-ray computed tomography.

    PubMed

    Libin, E E; Chakhlov, S V; Trinca, D

    2016-11-22

    A new mathematical approach using the inverse Radon equation for restoration of images in problems of linear two-dimensional X-ray tomography is formulated. In this approach, the Fourier transform is not used, which makes it possible to create practical computing algorithms with a more reliable mathematical substantiation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.

  14. Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk

    2016-05-01

    In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm which was previously proposed based on the geometrical information of the ROI (region of interest), considering the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also very easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm by presenting simulation results obtained using the MATLAB k-Wave toolbox.

  15. Anisotropic field-of-view shapes for improved PROPELLER imaging☆

    PubMed Central

    Larson, Peder E.Z.; Lustig, Michael S.; Nishimura, Dwight G.

    2010-01-01

    The Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) method for magnetic resonance imaging data acquisition and reconstruction has the highly desirable property of being able to correct for motion during the scan, making it especially useful for imaging pediatric or uncooperative patients and diffusion imaging. This method nominally supports a circular field of view (FOV), but tailoring the FOV for noncircular shapes results in more efficient, shorter scans. This article presents new algorithms for tailoring PROPELLER acquisitions to the desired FOV shape and size that are flexible and precise. The FOV design also allows for rotational motion which provides better motion correction and reduced aliasing artifacts. Some possible FOV shapes demonstrated are ellipses, ovals and rectangles, and any convex, pi-symmetric shape can be designed. Standard PROPELLER reconstruction is used with minor modifications, and results with simulated motion presented confirm the effectiveness of the motion correction with these modified FOV shapes. These new acquisition design algorithms are simple and fast enough to be computed for each individual scan. Also presented are algorithms for further scan time reductions in PROPELLER echo-planar imaging (EPI) acquisitions by varying the sample spacing in two directions within each blade. PMID:18818039

  16. The algorithm of central axis in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang

    2017-09-01

    Reverse engineering is an important technique for product imitation and new product development. Its core technology, surface reconstruction, is a current research focus. Among the various surface reconstruction algorithms, reconstruction based on the central (medial) axis is an important method. This paper summarizes the medial axis algorithms used for the various reconstructions, points out the problems that exist in the various methods as well as the areas that need improvement, and also discusses subsequent surface reconstruction and development along the axial direction.

  17. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.

    PubMed

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-07

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min⁻¹ · ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
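
    For context, the indirect method's voxel-wise kinetic step can be sketched as a weighted least-squares fit of a one-tissue compartment model to a reconstructed TAC; the input function, frame grid, and the omission of the spillover terms are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # One-tissue compartment model: C_T(t) = K1 * exp(-k2 t) (x) C_a(t),
    # evaluated on the frame grid. K1 (ml/min/ml) and k2 (1/min) are the
    # parameters of interest; decay correction is omitted for brevity.
    t = np.linspace(0, 5, 16)                      # frame mid-times (min)
    dt = t[1] - t[0]
    c_a = t * np.exp(-1.5 * t)                     # assumed arterial input

    def one_tissue(t, k1, k2):
        """Discrete convolution of the impulse response with the input."""
        return k1 * dt * np.convolve(np.exp(-k2 * t), c_a)[:len(t)]

    rng = np.random.default_rng(9)
    tac_meas = one_tissue(t, 0.8, 0.4) + 0.01 * rng.normal(size=len(t))
    w = np.full(len(t), 0.01)                      # per-frame noise SD
    (k1_hat, k2_hat), _ = curve_fit(one_tissue, t, tac_meas,
                                    p0=(0.5, 0.5), sigma=w,
                                    bounds=(0, np.inf))
    ```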

  18. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies

    PubMed Central

    Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong

    2017-01-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves -TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans - each containing 1/8th of the total number of events - were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL.min−1.mL−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP reconstructions. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843

  19. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies

    NASA Astrophysics Data System (ADS)

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-01

    Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves, TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans (each containing 1/8th of the total number of events) were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm, and the one-step late maximum a posteriori (OSL-MAP) algorithm, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-squares fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min⁻¹·ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that, at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level.
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP reconstructions. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
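
    To make the indirect method concrete, the sketch below fits a one-tissue compartment model to a single reconstructed time-activity curve by least squares. It is a minimal illustration, not the authors' code: the input function, frame grid, rate constants and noise level are invented, uniform weights are used, and the spillover terms are omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        np.random.seed(0)
        t = np.linspace(0.0, 5.0, 16)             # 16 frames over ~5 min
        cp = 10.0 * t * np.exp(-2.0 * t)          # toy arterial input function

        def one_tissue_tac(t, K1, k2):
            # C_T(t) = K1 * exp(-k2 t) convolved with Cp(t), on a uniform grid
            dt = t[1] - t[0]
            return K1 * np.convolve(cp, np.exp(-k2 * t))[:len(t)] * dt

        tac = one_tissue_tac(t, 0.8, 0.4)         # "true" K1 = 0.8 mL/min/mL
        noisy = tac + 0.05 * np.random.randn(len(t))

        # frame weights would normally reflect counts; uniform weights here
        (K1_hat, k2_hat), _ = curve_fit(one_tissue_tac, t, noisy, p0=(0.5, 0.5))
        print(f"estimated K1 = {K1_hat:.3f} mL/min/mL")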

  20. A reconstruction algorithm for helical CT imaging on PI-planes.

    PubMed

    Liang, Hongzhu; Zhang, Cishen; Yan, Ming

    2006-01-01

    In this paper, a Feldkamp-type approximate reconstruction algorithm is presented for helical cone-beam computed tomography. To effectively suppress artifacts due to large cone-angle scanning, it is proposed to reconstruct the object point by point on unique customized tilted PI-planes that are close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress cone-angle scanning artifacts. Computer simulations show that the proposed algorithm provides improved imaging performance compared with existing approximate cone-beam reconstruction algorithms.

  1. Iterative metal artifact reduction: evaluation and optimization of technique.

    PubMed

    Subhas, Naveen; Primak, Andrew N; Obuchowski, Nancy A; Gupta, Amit; Polster, Joshua M; Krauss, Andreas; Iannotti, Joseph P

    2014-12-01

    Iterative metal artifact reduction (IMAR) is a sinogram inpainting technique that incorporates high-frequency data from standard weighted filtered back projection (WFBP) reconstructions to reduce metal artifact on computed tomography (CT). This study was designed to compare the image quality of IMAR and WFBP in total shoulder arthroplasties (TSA); determine the optimal amount of WFBP high-frequency data needed for IMAR; and compare image quality of the standard 3D technique with that of a faster 2D technique. Eight patients with nine TSA underwent CT with standardized parameters: 140 kVp, 300 mAs, 0.6 mm collimation and slice thickness, and B30 kernel. WFBP, three 3D IMAR algorithms with different amounts of WFBP high-frequency data (IMARlo, lowest; IMARmod, moderate; IMARhi, highest), and one 2D IMAR algorithm were reconstructed. Differences in attenuation near hardware and away from hardware were measured and compared using repeated measures ANOVA. Five readers independently graded image quality; scores were compared using Friedman's test. Attenuation differences were smaller with all 3D IMAR techniques than with WFBP (p < 0.0063). With increasing high-frequency data, the attenuation difference increased slightly (differences not statistically significant). All readers ranked IMARmod and IMARhi more favorably than WFBP (p < 0.05), with IMARmod ranked highest for most structures. The attenuation difference was slightly higher with 2D than with 3D IMAR, with no significant reader preference for 3D over 2D. IMAR significantly decreases metal artifact compared to WFBP both objectively and subjectively in TSA. The incorporation of a moderate amount of WFBP high-frequency data and use of a 2D reconstruction technique optimize image quality and allow for relatively short reconstruction times.
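
    The sinogram inpainting idea at the core of MAR techniques like IMAR can be sketched in a few lines: detector bins crossing the metal trace are replaced by values interpolated from their unaffected neighbours before backprojection. This is a generic, hypothetical illustration; IMAR itself additionally re-inserts high-frequency content from the WFBP reconstruction, which is not shown.

        import numpy as np

        def inpaint_sinogram(sino, metal_mask):
            """sino: (views, detectors) array; metal_mask: boolean metal trace."""
            out = sino.copy()
            det = np.arange(sino.shape[1])
            for v in range(sino.shape[0]):
                bad = metal_mask[v]
                if bad.any() and (~bad).any():
                    # linearly interpolate across the metal shadow in this view
                    out[v, bad] = np.interp(det[bad], det[~bad], sino[v, ~bad])
            return out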

  2. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that hold great promise for limited-data cone-beam CT reconstruction with enhanced image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms, and an appropriate way of selecting the value of each individual parameter is suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction from limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm preserves the edges of the reconstructed images better while having fewer sensitive parameters to tune.
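
    Algorithms of the ASD-POCS/PCSD family alternate a data-fidelity update with a steepest-descent step on the image total variation; the TV step is where several of the sensitive parameters discussed above live (step size, number of TV iterations, smoothing epsilon). A minimal sketch of that step, with illustrative parameter values, is:

        import numpy as np

        def tv_gradient(img, eps=1e-8):
            # gradient of sum(sqrt(dx^2 + dy^2 + eps)) via forward differences
            dx = np.diff(img, axis=0, append=img[-1:, :])
            dy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            gx, gy = dx / mag, dy / mag
            div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
            return -div                      # descending on TV moves along -this

        def tv_descent(img, n_steps=20, alpha=0.1):
            for _ in range(n_steps):
                g = tv_gradient(img)
                img = img - alpha * g / (np.linalg.norm(g) + 1e-12)
            return img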

  3. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    PubMed Central

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-01-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, which are less sensitive to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968

  4. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    NASA Astrophysics Data System (ADS)

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-05-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, which are less sensitive to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
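
    The role of a TOF kernel can be illustrated with a toy backprojector: each event contributes along its line of response with a Gaussian weight centred on the TOF-estimated position, rather than uniformly. This is a schematic 2D sketch with invented names and units, not the DIRECT implementation, which partitions data into views and applies matched resolution kernels.

        import numpy as np

        def tof_backproject_event(img, xs, ys, x1, y1, x2, y2, t_pos, sigma_tof):
            """Add one event's TOF-weighted contribution to img.
            (x1,y1)-(x2,y2): LOR endpoints; t_pos in [0,1]: most likely position
            along the LOR from TOF; sigma_tof: kernel width in pixel units."""
            ux, uy = x2 - x1, y2 - y1
            L = np.hypot(ux, uy)
            s = ((xs - x1) * ux + (ys - y1) * uy) / (L * L)   # along-LOR fraction
            d = np.abs((xs - x1) * uy - (ys - y1) * ux) / L   # distance to LOR
            w = np.exp(-0.5 * ((s - t_pos) * L / sigma_tof) ** 2) * (d < 1.0)
            img += w
            return img

        n = 64
        ys, xs = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        img = tof_backproject_event(np.zeros((n, n)), xs, ys,
                                    0, 32, 63, 32, t_pos=0.4, sigma_tof=5.0)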

  5. Theory and algorithms for image reconstruction on chords and within regions of interest

    NASA Astrophysics Data System (ADS)

    Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.

    2005-11-01

    We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in the literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments; in this situation, the proposed algorithms reduce to the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, including image reconstruction on chords of a two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.

  6. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    PubMed

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography (4D CBCT) allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex (OSC) algorithm combined with total variation (TV) minimization regularization. Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or with a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented on a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be the combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate clinical use.
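
    The preferred 4D adaptation above (3D prior initialization, then per-phase OSC-TV) has a simple overall structure, sketched below with the data-fidelity and TV steps passed in as caller-supplied functions, since their internals are implementation-specific; all names here are illustrative.

        def reconstruct_4d(projections_by_phase, prior_3d, osc_update, tv_minimize,
                           n_iters=10):
            """projections_by_phase: dict phase -> projection data;
            prior_3d: e.g. an FDK reconstruction of all projections;
            osc_update, tv_minimize: caller-supplied step functions."""
            phases = {}
            for phase, projs in projections_by_phase.items():
                x = prior_3d.copy()              # initialize from the 3D prior
                for _ in range(n_iters):
                    x = osc_update(x, projs)     # ordered-subsets convex data step
                    x = tv_minimize(x)           # total-variation regularization
                phases[phase] = x
            return phases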

  7. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; Kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that accurately reconstructs MR images from undersampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI; it directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It is shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from undersampled MRI data. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log-thresholding, soft-thresholding and hard-thresholding techniques at different reduction factors.
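
    A minimal sketch of an ISTA-style CS-MRI iteration with a generalized p-shrinkage operator is given below. The threshold, the value of p, the image-domain thresholding and the simple Fourier-mask sampling operator are illustrative assumptions, not the authors' exact formulation.

        import numpy as np

        def p_threshold(x, lam, p=0.5):
            # generalized p-shrinkage; reduces to soft thresholding at p = 1
            mag = np.maximum(np.abs(x), 1e-12)
            return np.maximum(mag - lam * mag**(p - 1.0), 0.0) * x / mag

        def ista_mri(kspace, mask, lam=0.01, p=0.5, n_iter=50):
            """kspace: full-size k-space array (unsampled entries ignored);
            mask: boolean undersampling pattern of the same shape."""
            x = np.fft.ifft2(kspace * mask)                  # zero-filled start
            for _ in range(n_iter):
                r = (np.fft.fft2(x) - kspace) * mask         # k-space residual
                x = x - np.fft.ifft2(r)                      # data-consistency step
                x = p_threshold(x, lam, p)                   # sparsity-promoting step
            return x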

  8. Rehabilitation after anterior cruciate ligament reconstruction: criteria-based progression through the return-to-sport phase.

    PubMed

    Myer, Gregory D; Paterno, Mark V; Ford, Kevin R; Quatman, Carmen E; Hewett, Timothy E

    2006-06-01

    Rehabilitation following anterior cruciate ligament (ACL) reconstruction has undergone a relatively rapid and global evolution over the past 25 years. However, there is an absence of standardized, objective criteria to accurately assess an athlete's ability to progress through the end stages of rehabilitation and safe return to sport. Return-to-sport rehabilitation, progressed by quantitatively measured functional goals, may improve the athlete's integration back into sport participation. The purpose of the following clinical commentary is to introduce an example of a criteria-driven algorithm for progression through return-to-sport rehabilitation following ACL reconstruction. Our criteria-based protocol incorporates a dynamic assessment of baseline limb strength, patient-reported outcomes, functional knee stability, bilateral limb symmetry with functional tasks, postural control, power, endurance, agility, and technique with sport-specific tasks. Although this algorithm has limitations, it serves as a foundation to expand future evidence-based evaluation and to foster critical investigation into the development of objective measures to accurately determine readiness to safely return to sport following injury.

  9. Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.

    PubMed

    Borbély, Bence J; Szolgay, Péter

    2017-01-17

    Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive (and therefore off-line) solutions to calculate the anatomical joint angles from motion-captured raw measurement data (a process also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases where a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.

  10. Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data

    NASA Astrophysics Data System (ADS)

    Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.

    2013-01-01

    A new version of the TomoRebuild data reduction software package is presented for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present the state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps, and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user-friendly interface was designed in Java as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text-format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is now about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example using experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which opens a new perspective on tomography using a low number of projections or a limited angle.
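
    The MLEM update added in this version of the package is the textbook multiplicative formula x_{k+1} = x_k / (Aᵀ1) · Aᵀ(y / (A x_k)). A generic dense-matrix sketch (an illustration, not the package's own code) is:

        import numpy as np

        def mlem(A, y, n_iter=20):
            """A: (n_rays, n_voxels) system matrix; y: measured sinogram (n_rays,)."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ x                         # forward projection
                ratio = y / np.maximum(proj, 1e-12)  # measured / estimated
                x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)
            return x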

  11. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible as a trivial extension of the method for simple energy models. We then present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold, which guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA interaction prediction are expected to benefit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.

  12. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    PubMed

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized 2.5 mm multiplanar reformations with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ, using a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher with MBIR than with ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is a frequently required examination, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR-reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
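
    SNR and CNR figures of the kind reported above are typically computed from the mean and standard deviation of ROI attenuation values. A generic helper (an illustration, not the study's code) looks like:

        import numpy as np

        def snr(roi):
            # mean signal relative to its own noise within one ROI
            return np.mean(roi) / np.std(roi)

        def cnr(roi_a, roi_b):
            # contrast between two tissues relative to their combined noise
            noise = np.sqrt((np.var(roi_a) + np.var(roi_b)) / 2.0)
            return abs(np.mean(roi_a) - np.mean(roi_b)) / noise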

  13. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  14. SU-F-BRCD-09: Total Variation (TV) Based Fast Convergent Iterative CBCT Reconstruction with GPU Acceleration.

    PubMed

    Xu, Q; Yang, D; Tan, J; Anastasio, M

    2012-06-01

    To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction through a fast-converging iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction method that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10⁻⁴ and 10⁻², respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-converging iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.

  15. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    PubMed

    Zhuge, Xiaodong; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing, steering the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstructions than existing algorithms under noisy conditions, from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort in parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.

  16. Fully automated reconstruction of three-dimensional vascular tree structures from two orthogonal views using computational algorithms and production rules

    NASA Astrophysics Data System (ADS)

    Liu, Iching; Sun, Ying

    1992-10-01

    A system for reconstructing 3-D vascular structure from two orthogonally projected images is presented. The formidable problem of matching segments between two views is solved using knowledge of the epipolar constraint and the similarity of segment geometry and connectivity. The knowledge is represented in a rule-based system, which also controls the operation of several computational algorithms for tracking segments in each image, representing 2-D segments with directed graphs, and reconstructing 3-D segments from matching 2-D segment pairs. Uncertain reasoning governs the interaction between segmentation and matching; it also provides a framework for resolving the matching ambiguities in an iterative way. The system was implemented in the C language and the C Language Integrated Production System (CLIPS) expert system shell. Using video images of a tree model, the standard deviation of reconstructed centerlines was estimated to be 0.8 mm (1.7 mm) when the view direction was parallel (perpendicular) to the epipolar plane. Feasibility of clinical use was shown using x-ray angiograms of a human chest phantom. The correspondence of vessel segments between two views was accurate. Computational time for the entire reconstruction process was under 30 s on a workstation. A fully automated system for two-view reconstruction that does not require the a priori knowledge of vascular anatomy is demonstrated.
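
    The epipolar constraint used for segment matching can be expressed as a point-to-line distance test: a point in one view must lie near the epipolar line of its candidate match in the other view. A minimal sketch, assuming a known fundamental matrix F and a hypothetical pixel tolerance, is:

        import numpy as np

        def epipolar_residual(F, p1, p2):
            """F: 3x3 fundamental matrix; p1, p2: (x, y) points in views 1 and 2."""
            x1 = np.array([p1[0], p1[1], 1.0])
            x2 = np.array([p2[0], p2[1], 1.0])
            line = F @ x1                              # epipolar line in view 2
            # distance of p2 from the epipolar line of p1
            return abs(x2 @ line) / np.hypot(line[0], line[1])

        def is_match(F, p1, p2, tol=1.5):              # tol in pixels (assumed)
            return epipolar_residual(F, p1, p2) < tol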

  17. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    PubMed

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods.

  18. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) for real-time 3-dimensional (3D) processing is proposed. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range. The proposed algorithm can also increase the quality of 3D imaging: by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments with the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesis time of a reconstructed image to about 7.02 s.

  19. Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Mickevicius, Nikolai J.; Paulson, Eric S.

    2017-04-01

    The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D-MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MR-guided radiation therapy (MRgRT). A 3D stack-of-stars (SoS), self-navigated 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high-performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms, using computing hardware more advanced than that typically found on product MRI scanners, can result in acquisition and reconstruction of high quality respiratory-correlated 4D-MRI images in less than five minutes.

  20. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2010-03-07

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
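
    The nesting idea is easiest to see in skeleton form: one expensive tomographic update per outer iteration, wrapped around many cheap linear kinetic-model updates. The step functions below are caller-supplied stand-ins (the paper derives its updates from optimization transfer); all names are illustrative.

        def nested_direct_recon(y, em_image_update, kinetic_ls_update, theta0,
                                n_outer=20, n_inner=10):
            """y: dynamic PET data; theta0: initial linear parametric image;
            em_image_update, kinetic_ls_update: caller-supplied step functions."""
            theta = theta0
            for _ in range(n_outer):
                x = em_image_update(y, theta)        # one costly EM-style update
                for _ in range(n_inner):             # many cheap modeling updates
                    theta = kinetic_ls_update(x, theta)
            return theta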

  1. Surgical therapy of vulvar cancer: how to choose the correct reconstruction?

    PubMed Central

    2016-01-01

    Objective To create a comprehensive algorithmic approach to reconstruction after vulvar cancer ablative surgery, which includes both traditional and perforator flaps, evaluating the anatomical subunits and the shape of the defect. Methods We retrospectively reviewed 80 cases of reconstruction after vulvar cancer ablative surgery, performed between June 2006 and January 2016, transferring 101 flaps. We recorded the ability to achieve complete wound closure, even in the presence of very complex defects, and the postoperative complications. On the basis of this experience, analyzing the choices made and considering the complications, we developed an algorithm to help with the selection of the flap in vulvoperineal reconstruction after oncologic ablative surgery for vulvar cancer. Results We employed eight different types of flaps, including 54 traditional fasciocutaneous V-Y flaps, 23 rectus abdominis myocutaneous flaps, 11 anterolateral thigh flaps, three V-Y gracilis myocutaneous flaps, three free-style perforator V-Y flaps from the inner thigh, two Limberg flaps, two lotus flaps, two deep inferior epigastric artery perforator flaps, and one superficial circumflex iliac artery perforator flap. The structures most frequently involved in resection were the vulva, perineum, mons pubis, groins, vagina, urethra and, more rarely, the rectum, bladder, and lower abdominal wall. Conclusion The algorithm we implemented can be a useful tool to help flap selection. The key points in the decision-making process are: the anatomical subunits to be covered, the overall shape and symmetry of the defect, and patient features such as skin laxity or previous radiotherapy. Perforator flaps, when feasible, must be considered the standard in vulvoperineal reconstruction, although in some cases traditional flaps remain the best choice. PMID:27550406

  2. Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.

    PubMed

    Tang, Shaojie; Tang, Xiangyang

    2016-09-01

    The backprojection-filtration (BPF) and derivative backprojection filtered (DBPF) algorithms, which share Hilbert filtering as their common algorithmic feature, were originally derived for exact helical reconstruction from cone-beam (CB) scan data and for axial reconstruction from fan-beam data, respectively. These two algorithms can be heuristically extended to image reconstruction from axial CB scan data, but they induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate these artifacts. The solution integrates the three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous tests of reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts that appear in off-central planes of images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.

  3. Exploiting the wavelet structure in compressed sensing MRI.

    PubMed

    Chen, Chen; Huang, Junzhou

    2014-12-01

    Sparsity has been widely utilized in magnetic resonance imaging (MRI) to reduce k-space sampling. According to structured sparsity theories, fewer measurements are required for tree-sparse data than for data with standard sparsity only. Intuitively, more accurate image reconstruction can be achieved with the same number of measurements by exploiting the wavelet tree structure in MRI. A novel algorithm is proposed in this article to reconstruct MR images from undersampled k-space data. In contrast to conventional compressed sensing MRI (CS-MRI), which relies only on the sparsity of MR images in the wavelet or gradient domain, we exploit the wavelet tree structure to improve CS-MRI. This tree-based CS-MRI problem is decomposed into three simpler subproblems, each of which can be efficiently solved by an iterative scheme. Simulations and in vivo experiments demonstrate the significant improvement of the proposed method compared to conventional CS-MRI algorithms, and its feasibility on MR data compared to existing tree-based imaging algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. On the Accuracy of Language Trees

    PubMed Central

    Pompei, Simone; Loreto, Vittorio; Tria, Francesca

    2011-01-01

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information typically consists of lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of an inverse problem: starting from present, incomplete and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further, we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally, we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions for improving it. PMID:21674034

  5. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    PubMed

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). Subjective and objective image quality were assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p < 0.01 for all). All iMAR-reconstructed images showed significantly lower artefacts (p < 0.01) compared with WFBP, while there was no significant difference between the iMAR algorithms. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p < 0.01). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants, in both subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  6. Robust sparse image reconstruction of radio interferometric observations with PURIFY

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; McEwen, Jason D.; d'Avezac, Mayeul; Carrillo, Rafael E.; Onose, Alexandru; Wiaux, Yves

    2018-01-01

    Next-generation radio interferometers, such as the Square Kilometre Array, will revolutionize our understanding of the Universe through their unprecedented sensitivity and resolution. However, to realize these goals, significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and scalability for big data. In this work, we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers algorithm presented in a recent article. First, we assess the impact of the interpolation kernel used to perform gridding and degridding on sparse image reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as well as prolate spheroidal wave functions while providing a computational saving and an analytic form. Second, we apply PURIFY to real interferometric observations from the Very Large Array and the Australia Telescope Compact Array and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Third, we discuss how PURIFY reconstructions exhibit additional advantages over those recovered by CLEAN. The latest version of PURIFY, with the developments presented in this work, is made publicly available.

  7. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation allows this large-scale inverse problem to be solved in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by a factor of about 15 through the use of graphics hardware without compromising the accuracy of the reconstructed images.

  8. Optimizing 4DCBCT projection allocation to respiratory bins.

    PubMed

    O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J

    2014-10-07

    4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy in which projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling, a result of the irregular nature of patient breathing and of the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate whether sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal-density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking, and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than with conventional phase-based binning and 59%-76% smaller than with conventional displacement binning, indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation of the marker volume was 20%-90% smaller when using optimized projection allocation than with conventional phase-based binning, suggesting more uniform marker segmentation and less motion blur. Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement-binned 4DCBCT images in clinical applications.
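
    The streaking metric used in this study, the standard deviation of the angular separation between the projections assigned to a respiratory bin, is straightforward to compute. A small illustrative helper (with hypothetical angle lists) is:

        import numpy as np

        def angular_gap_std(angles_deg):
            # sort projection angles and include the wrap-around gap at 360
            a = np.sort(np.asarray(angles_deg) % 360.0)
            gaps = np.diff(np.concatenate([a, [a[0] + 360.0]]))
            return np.std(gaps)

        print(angular_gap_std([0, 90, 180, 270]))   # 0.0: perfectly uniform
        print(angular_gap_std([0, 10, 20, 300]))    # large: clustered projections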

  9. Low Statistics Reconstruction of the Compton Camera Point Spread Function in 3D Prompt-γ Imaging of Ion Beam Therapy

    NASA Astrophysics Data System (ADS)

    Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy

    2013-10-01

    The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.

  10. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
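
    The image MSE decomposition underlying this analysis can be estimated directly from repeated reconstructions of a known test image, since MSE = bias² + variance. A generic estimator (an illustration, not the study's code) is:

        import numpy as np

        def bias_variance_mse(recons, truth):
            """recons: (n_repeats, ...) stack of reconstructions of one truth image."""
            mean_img = recons.mean(axis=0)
            bias2 = np.mean((mean_img - truth) ** 2)   # squared bias over the image
            var = np.mean(recons.var(axis=0))          # mean per-voxel variance
            return bias2, var, bias2 + var             # MSE = bias^2 + variance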

  11. Algorithms to eliminate the influence of non-uniform intensity distributions on wavefront reconstruction by quadri-wave lateral shearing interferometers

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing

    2017-11-01

    In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.

  12. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    NASA Astrophysics Data System (ADS)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures compared to those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures compared to PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using the commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have executed the algorithms on a 4-node IBM SP2 parallel computer. The results show that the execution time of the algorithm on an EH(3,1) is better than that of a 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
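
    For readers unfamiliar with the SIRT family to which PBR belongs, a generic textbook SIRT update under the linear model A x = b is sketched below in numpy; this is not the authors' modified PBR, nor its parallel DSP implementation.

        import numpy as np

        def sirt(A, b, n_iter=50):
            """Simultaneous iterative reconstruction technique for A x = b."""
            row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0  # ray normalisation
            col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0  # pixel normalisation
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                residual = (b - A @ x) / row_sum   # normalised data mismatch
                x += (A.T @ residual) / col_sum    # backproject and update all pixels
            return x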

  13. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of the refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that draws on compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI reconstruction can be transformed into a previously solved problem in transmission CT imaging. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  14. Accurate low-dose iterative CT reconstruction from few projections by Generalized Anisotropic Total Variation minimization for industrial CT.

    PubMed

    Debatin, Maurice; Hesser, Jürgen

    2015-01-01

    Reducing the time needed for data acquisition and reconstruction in industrial CT decreases the operation time of the X-ray machine and therefore increases sales. This can be achieved by reducing both the dose (via the pulse length of the CT system) and the number of projections used for reconstruction. In this paper, a novel Generalized Anisotropic Total Variation (GATV) regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods: Total Variation (TV), Adaptive weighted Total Variation (AwTV) and Filtered Backprojection. The novel regularization function uses a priori information about the gradient magnitude distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different numbers of projection views. When noise is not present, error-free reconstructions are achievable for AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a Relative Root Mean Square Error that is up to 11 times lower than Total Variation-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g. for an SNR of 223 and 40 projections). To obtain the same reconstruction quality as achieved by Total Variation, the number of projections and the pulse length (and hence the acquisition time and the dose) can be reduced by a factor of approximately 3.5 when AwTV is used and by a factor of approximately 6.7 when our proposed algorithm is used.

  15. Metal implants on CT: comparison of iterative reconstruction algorithms for reduction of metal artifacts with single energy and spectral CT scanning in a phantom model.

    PubMed

    Fang, Jieming; Zhang, Da; Wilcox, Carol; Heidinger, Benedikt; Raptopoulos, Vassilios; Brook, Alexander; Brook, Olga R

    2017-03-01

    To assess single energy metal artifact reduction (SEMAR) and spectral energy metal artifact reduction (MARS) algorithms in reducing artifacts generated by different metal implants. A phantom containing various metal implants was scanned with and without SEMAR (Aquilion One, Toshiba) and MARS (Discovery CT750 HD, GE). Images were evaluated objectively by measuring the standard deviation in regions of interest and subjectively by two independent reviewers grading on a scale of 0 (no artifact) to 4 (severe artifact). Reviewers also graded new artifacts introduced by the metal artifact reduction algorithms. SEMAR and MARS significantly decreased variability of the density measurement adjacent to the metal implant, with a median SD (standard deviation of the density measurement) of 52.1 HU without SEMAR vs. 12.3 HU with SEMAR, p < 0.001. The median SD without MARS of 63.1 HU decreased to 25.9 HU with MARS, p < 0.001. The median SD with SEMAR is significantly lower than the median SD with MARS (p = 0.0011). SEMAR improved subjective image quality, with a reduction in overall artifact grading from 3.2 ± 0.7 to 1.4 ± 0.9, p < 0.001. The improvement of overall image quality by MARS did not reach statistical significance (3.2 ± 0.6 to 2.6 ± 0.8, p = 0.088). The metal artifact reduction algorithm introduced significant new artifacts for MARS (2.4 ± 1.0) but only minimal artifacts for SEMAR (0.4 ± 0.7), p < 0.001. CT iterative reconstruction algorithms with single and spectral energy are both effective in reducing metal artifacts. The single energy-based algorithm provides better overall image quality than the spectral CT-based algorithm. The spectral metal artifact reduction algorithm introduces mild to moderate artifacts in the far field.

  16. Optimization-based reconstruction for reduction of CBCT artifact in IGRT

    NASA Astrophysics Data System (ADS)

    Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan

    2016-04-01

    Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image guided radiation therapy (IGRT) by providing 3D spatial information of the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that the ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm indicate a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.

  17. A density based algorithm to detect cavities and holes from planar points

    NASA Astrophysics Data System (ADS)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is mainly divided into two steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a low-density region surrounded by a high-density region. With this structure, cavity and hole are characterized by a mathematical formulation called the compactness of a point, formed by the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness of the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes with varying point set densities and distributions.

  18. Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    PubMed Central

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2014-01-01

    High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given small penalty weights. Our iterative algorithm is implemented on a GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information on low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
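
    A sketch of the central idea, a per-pixel penalty weight that is small at detected edges so the TV smoothing acts mainly on non-edge regions, is given below; the rational weight form and the threshold `delta` are illustrative assumptions, not the paper's exact weighting scheme.

        import numpy as np

        def edge_preserving_tv(img, delta=0.02):
            """Weighted TV value: edges get small weights and are smoothed less."""
            gx = np.diff(img, axis=0, append=img[-1:, :])  # forward difference, rows
            gy = np.diff(img, axis=1, append=img[:, -1:])  # forward difference, cols
            grad_mag = np.hypot(gx, gy)
            w = 1.0 / (1.0 + (grad_mag / delta) ** 2)      # weight -> 0 at strong edges
            return (w * grad_mag).sum()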

  19. Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.

    2016-10-01

    With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided free to the scientific community, for regular use and for possible further optimization.

  20. Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering

    PubMed Central

    Tang, Shaojie; Tang, Xiangyang

    2016-01-01

    Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting reconstruction accuracy and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512

  1. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function exhibiting spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of standard MUSIC.
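
    To make the baseline concrete, a standard-MUSIC pseudospectrum for a ULA is sketched below; the paper's actual contribution (the Kronecker steering-vector factorization and the compressed sector search over mirror angles) is not reproduced here.

        import numpy as np

        def music_spectrum(X, n_sources, d_over_lambda=0.5):
            """X: (n_antennas, n_snapshots) complex ULA snapshots. Returns grid, spectrum."""
            m, n = X.shape
            R = X @ X.conj().T / n                 # sample covariance matrix
            _, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
            En = eigvecs[:, : m - n_sources]       # noise-subspace eigenvectors
            grid = np.linspace(-90.0, 90.0, 1801)  # full spectral search grid (degrees)
            k = np.arange(m)[:, None]
            A = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(grid)))
            proj = En.conj().T @ A                 # project steering vectors on noise subspace
            spectrum = 1.0 / np.sum(np.abs(proj) ** 2, axis=0)  # peaks at the DOAs
            return grid, spectrum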

  2. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701

  3. Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Leuenberger, Markus C.

    2018-06-01

    Greenland's past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope data (δ15N or δ40Ar or δ15Nexcess) extracted from ancient air in Greenland ice cores using published accumulation-rate (Acc) datasets. We present here a novel methodology to solve this inverse problem, by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data that we use as true values (targets) to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent of ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc) by using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which also led to robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard-distribution function. For the study on synthetic data, 95 % of the mismatches compared to the synthetic target data are in an envelope between 3.0 and 6.3 permeg for δ15N and 0.23 and 0.51 K for temperature (2σ, respectively). In addition to Holocene temperature reconstructions, the fitting approach can also be used for glacial temperature reconstructions. This is shown by fitting of the North Greenland Ice Core Project (NGRIP) δ15N data for two Dansgaard-Oeschger events using the presented approach, leading to results comparable to other studies.

  4. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved within the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.

  5. Mutual information estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view of sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto- and cross-mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrasted our results to the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
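
    The kernel idea for irregular sampling, weighting all sample pairs by a Gaussian in their lag mismatch instead of interpolating, is sketched below for the correlation case; the MI estimator described above instead feeds such Gaussian weights into (joint) histogram counts. Names and the bandwidth parameter `h` are illustrative.

        import numpy as np

        def gaussian_kernel_correlation(tx, x, ty, y, lag, h):
            """Estimate corr(x(t), y(t + lag)) from irregularly sampled series."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            dt = ty[None, :] - tx[:, None] - lag   # pairwise lag mismatch
            w = np.exp(-0.5 * (dt / h) ** 2)       # Gaussian weights, bandwidth h
            return (w * np.outer(x, y)).sum() / w.sum()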

  6. High-resolution computed tomography of single breast cancer microcalcifications in vivo.

    PubMed

    Inoue, Kazumasa; Liu, Fangbing; Hoppin, Jack; Lunsford, Elaine P; Lackas, Christian; Hesterman, Jacob; Lenkinski, Robert E; Fujii, Hirofumi; Frangioni, John V

    2011-08-01

    Microcalcification is a hallmark of breast cancer and a key diagnostic feature for mammography. We recently described the first robust animal model of breast cancer microcalcification. In this study, we hypothesized that high-resolution computed tomography (CT) could potentially detect the genesis of a single microcalcification in vivo and quantify its growth over time. Using a commercial CT scanner, we systematically optimized acquisition and reconstruction parameters. Two ray-tracing image reconstruction algorithms were tested: a voxel-driven "fast" cone beam algorithm (FCBA) and a detector-driven "exact" cone beam algorithm (ECBA). By optimizing acquisition and reconstruction parameters, we were able to achieve a resolution of 104 μm full width at half-maximum (FWHM). At an optimal detector sampling frequency, the ECBA provided a 28 μm (21%) FWHM improvement in resolution over the FCBA. In vitro, we were able to image a single 300 μm × 100 μm hydroxyapatite crystal. In a syngeneic rat model of breast cancer, we were able to detect the genesis of a single microcalcification in vivo and follow its growth longitudinally over weeks. Taken together, this study provides an in vivo "gold standard" for the development of calcification-specific contrast agents and a model system for studying the mechanism of breast cancer microcalcification.

  7. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
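
    A plain (non-accelerated) projected-gradient step for this problem alternates a gradient move with a projection back onto the density matrices, i.e. Hermitian, positive semidefinite, unit trace. The sketch below uses the standard eigenvalue simplex projection; `neg_log_lik_grad` is a hypothetical callable returning the Hermitian gradient of the negative log-likelihood, and the momentum-based acceleration of the paper is omitted.

        import numpy as np

        def project_simplex(v):
            """Euclidean projection of a real vector onto the probability simplex."""
            u = np.sort(v)[::-1]
            css = np.cumsum(u)
            j = np.arange(1, len(u) + 1)
            rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
            theta = (1.0 - css[rho]) / (rho + 1.0)
            return np.maximum(v + theta, 0.0)

        def project_density(H):
            """Project a Hermitian matrix onto {rho >= 0, tr(rho) = 1}."""
            w, V = np.linalg.eigh(H)
            return (V * project_simplex(w)) @ V.conj().T

        def projected_gradient_mle(neg_log_lik_grad, dim, step=0.1, n_iter=500):
            rho = np.eye(dim, dtype=complex) / dim  # maximally mixed starting state
            for _ in range(n_iter):
                rho = project_density(rho - step * neg_log_lik_grad(rho))
            return rho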

  8. UV Reconstruction Algorithm And Diurnal Cycle Variability

    NASA Astrophysics Data System (ADS)

    Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara

    2009-03-01

    UV reconstruction is a method for estimating surface UV with the use of available actinometric and aerological measurements. UV reconstruction is necessary for the study of long-term UV change. A typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in a reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. To develop the improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland were collected with the following instruments: a NILU-UV multichannel radiometer, a Kipp&Zonen pyranometer, and radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used for reconstruction of UV at four Polish sites (Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane) since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.

  9. Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters

    NASA Astrophysics Data System (ADS)

    Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada

    2011-01-01

    We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, is an improvement on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared to that of most existing reconstruction algorithms in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework, is also reported.
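
    With mechanically scanned slices that are parallel and evenly spaced, a plain Savitzky-Golay filter can already smooth along the slice axis, as the hedged sketch below shows on a toy volume; the CSG filter above goes further by adding a cyclic indicator to the least-squares objective so that smoothing and interpolation are done jointly, which scipy's filter does not provide.

        import numpy as np
        from scipy.signal import savgol_filter

        volume = np.random.rand(64, 256, 256)  # toy stack: (slices, rows, cols)
        # Polynomial least-squares smoothing along the slice axis only.
        smoothed = savgol_filter(volume, window_length=7, polyorder=2, axis=0)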

  10. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
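
    The two closed-form ingredients named above can be sketched as one iteration of a generic variable-splitting ℓ1 solver: an FFT-domain quadratic update and elementwise soft thresholding. The kernel, data and the parameters `mu` and `lam` are placeholders, and this is not the paper's full QSM pipeline (phase unwrapping, SHARP filtering and magnitude weighting are omitted).

        import numpy as np

        def soft_threshold(x, t):
            """Proximal operator of t * ||x||_1 (elementwise shrinkage)."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def split_l1_step(z, u, data_fft, kernel_fft, mu, lam):
            # Quadratic subproblem: exact closed-form solve in the Fourier domain.
            x_fft = (np.conj(kernel_fft) * data_fft + mu * np.fft.fftn(z - u)) / \
                    (np.abs(kernel_fft) ** 2 + mu)
            x = np.fft.ifftn(x_fft).real
            z = soft_threshold(x + u, lam / mu)   # l1 subproblem, closed form
            u = u + x - z                         # dual (splitting) update
            return x, z, u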

  11. Experimental scheme and restoration algorithm of block compression sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed Sensing (CS) can use the sparseness of a target to obtain its image with much less data than required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, the orthogonal matching pursuit algorithm (OMP) and the total variation minimization algorithm (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
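
    A textbook sketch of one of the two restoration algorithms named above, orthogonal matching pursuit for y = A x with sparse x, follows; it is a generic implementation, not the authors' exact code.

        import numpy as np

        def omp(A, y, k):
            """Greedy recovery of a k-sparse x from y = A x."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
                support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef         # refit on current support
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x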

  12. The influence of different signal-to-background ratios on spatial resolution and F18-FDG-PET quantification using point spread function and time-of-flight reconstruction.

    PubMed

    Rogasch, Julian Mm; Hofheinz, Frank; Lougovski, Alexandr; Furth, Christian; Ruf, Juri; Großer, Oliver S; Mohnike, Konrad; Hass, Peter; Walke, Mathias; Amthauer, Holger; Steffen, Ingo G

    2014-12-01

    F18-fluorodeoxyglucose positron-emission tomography (FDG-PET) reconstruction algorithms can have substantial influence on quantitative image data used, e.g., for therapy planning or monitoring in oncology. We analyzed radial activity concentration profiles of differently reconstructed FDG-PET images to determine the influence of varying signal-to-background ratios (SBRs) on the respective spatial resolution, activity concentration distribution, and quantification (standardized uptake value [SUV], metabolic tumor volume [MTV]). Measurements were performed on a Siemens Biograph mCT 64 using a cylindrical phantom containing four spheres (diameter, 30 to 70 mm) filled with F18-FDG applying three SBRs (SBR1, 16:1; SBR2, 6:1; SBR3, 2:1). Images were reconstructed employing six algorithms (filtered backprojection [FBP], FBP + time-of-flight analysis [FBP + TOF], 3D-ordered subset expectation maximization [3D-OSEM], 3D-OSEM + TOF, point spread function [PSF], PSF + TOF). Spatial resolution was determined by fitting the convolution of the object geometry with a Gaussian point spread function to radial activity concentration profiles. MTV delineation was performed using fixed thresholds and semiautomatic background-adapted thresholding (ROVER, ABX, Radeberg, Germany). The pairwise Wilcoxon test revealed significantly higher spatial resolutions for PSF + TOF (up to 4.0 mm) compared to PSF, FBP, FBP + TOF, 3D-OSEM, and 3D-OSEM + TOF at all SBRs (each P < 0.05) with the highest differences for SBR1 decreasing to the lowest for SBR3. Edge elevations in radial activity profiles (Gibbs artifacts) were highest for PSF and PSF + TOF declining with decreasing SBR (PSF + TOF largest sphere; SBR1, 6.3%; SBR3, 2.7%). These artifacts induce substantial SUVmax overestimation compared to the reference SUV for PSF algorithms at SBR1 and SBR2 leading to substantial MTV underestimation in threshold-based segmentation. In contrast, both PSF algorithms provided the lowest deviation of SUVmean from reference SUV at SBR1 and SBR2. At high contrast, the PSF algorithms provided the highest spatial resolution and lowest SUVmean deviation from the reference SUV. In contrast, both algorithms showed the highest deviations in SUVmax and threshold-based MTV definition. At low contrast, all investigated reconstruction algorithms performed approximately equally. The use of PSF algorithms for quantitative PET data, e.g., for target volume definition or in serial PET studies, should be performed with caution - especially if comparing SUV of lesions with high and low contrasts.
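
    The resolution estimate described above fits the convolution of the object geometry with a Gaussian point spread function to radial activity profiles. The simplified sketch below fits a Gaussian directly to a point-like radial profile and reports FWHM = 2*sqrt(2*ln 2)*sigma; treating the profile as point-like is our simplifying assumption, not the authors' full model.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(r, amplitude, sigma, offset):
            return amplitude * np.exp(-0.5 * (r / sigma) ** 2) + offset

        def fwhm_from_profile(radii_mm, activity):
            """Fit a Gaussian to a radial activity profile and return FWHM in mm."""
            (amp, sigma, off), _ = curve_fit(gaussian, radii_mm, activity,
                                             p0=(activity.max(), 2.0, 0.0))
            return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)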

  13. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.

  14. Study of the mode of angular velocity damping for a spacecraft at non-standard situation

    NASA Astrophysics Data System (ADS)

    Davydov, A. A.; Sazonov, V. V.

    2012-07-01

    A non-standard situation on a spacecraft (an Earth satellite) is considered, in which there are no measurements of the spacecraft's angular velocity component about one of its body axes. Angular velocity measurements are used in controlling the spacecraft's attitude motion by means of flywheels. The arising problem is to study the operation of standard control algorithms in the absence of some necessary measurements. In this work this problem is solved for the algorithm ensuring the damping of the spacecraft's angular velocity. Such damping is shown not to be possible for all initial conditions of motion. In the general case one of two possible final modes is realized, each described by stable steady-state solutions of the equations of motion. In one of them, the spacecraft's angular velocity component about the axis for which the measurements are absent is nonzero. Estimates of the regions of attraction are obtained for these steady-state solutions by numerical calculations. A simple technique is suggested that allows one to exclude the initial conditions of the angular velocity damping mode from the attraction region of the undesirable solution. Several realizations of this mode that have taken place are reconstructed. This reconstruction was carried out using approximations of telemetry values of the angular velocity components and the total angular momentum of the flywheels, obtained in the non-standard situation, by solutions of the equations of the spacecraft's rotational motion.

  15. Development and Translation of Hybrid Optoacoustic/Ultrasonic Tomography for Early Breast Cancer Detection

    DTIC Science & Technology

    2014-09-01

    The purpose of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. Work to date has (i) developed time-of-flight extraction algorithms to perform USCT, (ii) developed image reconstruction algorithms for USCT, and (iii) developed …

  16. WE-D-18A-04: How Iterative Reconstruction Algorithms Affect the MTFs of Variable-Contrast Targets in CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, C.T.; Rong, J.; Dodge, C.W.

    2014-06-15

    Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120kVp, 0.8s, 40mm beam width, large SFOV, 2.5mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of reconstruction algorithms with decreasing CTDIvol from 24mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing MTF from different reconstruction algorithms, task-based MTF measurements are necessary.

  17. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    PubMed

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed using a phantom with an unrealistic model and with a heterogeneous background and noise that are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded a much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small, low-contrast microcalcifications, FBP reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. Texture analysis showed higher contrast and lower homogeneity for the FBP algorithm than for the other algorithms. Patient images reconstructed using the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms and considering the type of the breast and the acquisition techniques used (e.g., angular range, dose distribution) should include the use of actual patients or patient-like phantoms to increase the potential for practical applications.
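
    Common forms of the two quantitative metrics used above are sketched below; the ROI masks and the in-focus slice index are hypothetical inputs, and published ASF definitions vary in detail.

        import numpy as np

        def cnr(img, lesion_mask, bg_mask):
            """Contrast-to-noise ratio of a lesion against its background ROI."""
            return abs(img[lesion_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

        def asf(volume, lesion_mask, bg_mask, in_focus_slice):
            """Artifact spread function: per-slice contrast relative to the focal plane."""
            contrast = np.array([abs(s[lesion_mask].mean() - s[bg_mask].mean())
                                 for s in volume])
            return contrast / contrast[in_focus_slice]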

  18. Wind reconstruction algorithm for Viking Lander 1

    NASA Astrophysics Data System (ADS)

    Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter

    2017-06-01

    The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided an auxiliary measurement of wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Despite the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The number of available sols is extended from 350 to 2245.

  19. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the projection data of an adjacent artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance.
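
    For reference, a generic OS-EM update (the variant found effective above) is sketched below; `A` is a system matrix and `y` the measured projections, both placeholders, and the subset ordering here is a simple random split rather than a tuned access order.

        import numpy as np

        def os_em(A, y, n_subsets=8, n_iter=5, seed=0):
            """Ordered subset expectation maximization for y = A x with x >= 0."""
            rng = np.random.default_rng(seed)
            subsets = np.array_split(rng.permutation(A.shape[0]), n_subsets)
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                for rows in subsets:
                    As = A[rows]
                    sens = As.sum(axis=0); sens[sens == 0] = 1.0   # subset sensitivity
                    ratio = y[rows] / np.maximum(As @ x, 1e-12)    # measured / modelled
                    x *= (As.T @ ratio) / sens                     # multiplicative EM step
            return x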

  20. Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.

    PubMed

    Li, Liang; Wang, Bigong; Wang, Ge

    2016-01-01

    In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients. Then, an initial MRI image of a patient can be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performs significantly better than the DL algorithm applied to MRI data alone.

  1. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those of the dictionary learned from the corresponding MR image. Achieving robust performance in simulations at various noise levels and in patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.

  2. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.

    PubMed

    Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by incorporating a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or the backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can produce satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.

  3. Fast algorithm for wavefront reconstruction in XAO/SCAO with pyramid wavefront sensor

    NASA Astrophysics Data System (ADS)

    Shatokhina, Iuliia; Obereder, Andreas; Ramlau, Ronny

    2014-08-01

    We present a fast wavefront reconstruction algorithm developed for an extreme adaptive optics system equipped with a pyramid wavefront sensor on a 42 m telescope. The method is called the Preprocessed Cumulative Reconstructor with domain decomposition (P-CuReD). The algorithm is based on the theoretical relationship between pyramid and Shack-Hartmann wavefront sensor data. The algorithm consists of two consecutive steps: a data preprocessing step, and an application of the CuReD algorithm, which is a fast method for wavefront reconstruction from Shack-Hartmann sensor data. Closed-loop simulation results show that the P-CuReD method provides the same reconstruction quality as, and is significantly faster than, a matrix-vector multiplication (MVM) reconstructor.

  4. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  5. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    PubMed

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously validated patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, the differences and correlations between the corresponding subject scores, and the ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of the reconstruction algorithm. PDRP expression correlated strongly between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in PDRP expression among the algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of the PDRP is reproducible across a variety of reconstruction algorithms for 18F-FDG-PET brain images. The PDRP is thus capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials.
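
    Pattern expression scores of this kind are conventionally computed by projecting a subject's preprocessed scan onto the fixed covariance pattern and z-scoring the result against a control distribution. A minimal sketch under those assumptions (not the authors' pipeline; all names are illustrative):

```python
import numpy as np

def pattern_expression(scan, pattern, ref_scores):
    """Project one preprocessed (log-transformed, mean-centered) scan
    onto a fixed metabolic covariance pattern, then z-score against a
    reference (normal-control) distribution of raw scores.

    scan, pattern: 1D voxel vectors on a common brain mask.
    ref_scores: raw projection scores of the control group."""
    raw = float(scan @ pattern)
    return (raw - np.mean(ref_scores)) / np.std(ref_scores)
```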

  6. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art in describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done with a model observer that yields a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of this figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as the noise power spectrum. However, although computing the MTF poses no problem for the traditional filtered back-projection (FBP) algorithm, this is not the case for iterative reconstruction (IR) algorithms such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to express the system resolution accurately even for non-linear algorithms, we adapted the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water: the contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations then led to the TTF. As expected, the first results showed a dependency of the TTF on the image contrast and noise levels for both ASIR and MBIR; moreover, FBP also proved to be contrast- and noise-dependent when the lung kernel was used. These results were then introduced into the NPW model observer, and we observed an enhancement of SNR each time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions, and based on our results, the use of MBIR could enable further dose reduction in several clinical applications.
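
    In Fourier terms, the NPW figure of merit combines the imaged signal spectrum, here the task signal spectrum multiplied by the TTF in place of the MTF, with the noise power spectrum. A schematic radial-frequency computation, with all input arrays assumed rather than taken from the paper:

```python
import numpy as np

def npw_snr(signal_spectrum, ttf, nps, f):
    """Non-prewhitening model-observer SNR (1D radial sketch).

    signal_spectrum: |S(f)| of the task signal; ttf: target transfer
    function standing in for the MTF; nps: noise power spectrum;
    f: radial frequency samples. The 2*pi*f factor assumes 2D isotropy.
    SNR^2 = (integral of |S*TTF|^2)^2 / integral of |S*TTF|^2 * NPS."""
    s_img2 = (signal_spectrum * ttf) ** 2
    num = np.trapz(s_img2 * 2 * np.pi * f, f) ** 2
    den = np.trapz(s_img2 * nps * 2 * np.pi * f, f)
    return np.sqrt(num / den)
```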

  7. NOTE: A BPF-type algorithm for CT with a curved PI detector

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-01

    Helical cone-beam CT is widely used because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59), referred to as a backprojection-filtration (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires only the minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both the longitudinal and transverse directions. In practical CT systems, detectors are expensive and account for a substantial share of the total system cost. Hence, we derive an exact reconstruction algorithm for a CT system with a detector of the smallest possible size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm follows the framework of the BPF algorithm, and numerical simulations validate it.

  8. A BPF-type algorithm for CT with a curved PI detector.

    PubMed

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-21

    Helical cone-beam CT is widely used because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59), referred to as a backprojection-filtration (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires only the minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both the longitudinal and transverse directions. In practical CT systems, detectors are expensive and account for a substantial share of the total system cost. Hence, we derive an exact reconstruction algorithm for a CT system with a detector of the smallest possible size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm follows the framework of the BPF algorithm, and numerical simulations validate it.

  9. Refraction corrected calibration for aquatic locomotion research: application of Snell's law improves spatial accuracy.

    PubMed

    Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L

    2015-07-07

    Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform algorithm (DLT) to reconstruct position information, which does not model refraction explicitly. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, i.e., the difference between the actual and reconstructed position of a marker, and found it to be small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and its errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras and reconstructed the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust; it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic maneuvers.
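
    The core of any such refraction-corrected tracer is the vector form of Snell's law applied at each interface. A self-contained sketch of that single step (not the authors' RCRT code; names are illustrative):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract a ray direction d at a surface with normal n (pointing
    into the incident medium), using the vector form of Snell's law.

    n1, n2: refractive indices of the incident and transmitting media.
    Returns the refracted unit direction, or None when total internal
    reflection occurs."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -float(np.dot(n, d))
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

    Chaining this step across the air-glass and glass-water interfaces, and intersecting the resulting rays from multiple cameras, yields the refraction-corrected 3D position.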

  10. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including its Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition method and the modified Adomian decomposition method in terms of the newly proposed variational iteration method-II (VIM). Careful investigation reveals unnecessary calculation of the Lagrange multiplier in the earlier variational iteration algorithm and repeated calculations in each iteration of the Adomian decomposition method. Several examples are given to verify the reliability and efficiency of the method.

  11. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed by Chambolle and Pock (CP) in 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
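
    To illustrate the kind of rapid prototyping the article advocates, the sketch below instantiates the CP primal-dual iteration for the simplest related problem, TV-regularized denoising (the ROF model); a CT instance would replace the identity data operator with a projection matrix. All names are illustrative:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2D image (Neumann boundary)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def cp_tv_denoise(b, lam, n_iter=200):
    """Chambolle-Pock for min_u (lam/2)||u - b||^2 + TV(u)."""
    tau = sigma = 1.0 / np.sqrt(8.0)     # ||grad||^2 <= 8 on a 2D grid
    u = b.copy(); u_bar = b.copy()
    px = np.zeros_like(b); py = np.zeros_like(b)
    for _ in range(n_iter):
        # Dual ascent step, then projection onto the unit ball.
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py))
        px /= norm; py /= norm
        # Primal proximal step for the quadratic data term.
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * b) / (1.0 + tau * lam)
        u_bar = 2 * u - u_old            # over-relaxation
    return u
```

    The step sizes satisfy the CP convergence condition sigma * tau * L^2 <= 1 with L^2 = 8 for the 2D gradient operator.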

  12. Calcium scoring with dual-energy CT in men and women: an anthropomorphic phantom study

    NASA Astrophysics Data System (ADS)

    Li, Qin; Liu, Songtao; Myers, Kyle; Gavrielides, Marios A.; Zeng, Rongping; Sahiner, Berkman; Petrick, Nicholas

    2016-03-01

    This work aimed to quantify and compare the potential impact of gender differences on coronary artery calcium scoring with dual-energy CT. An anthropomorphic thorax phantom with four synthetic heart vessels (diameters 3-4.5 mm; female/male left main and left circumflex arteries) was scanned with and without female breast plates. Ten repeat scans were acquired in both single- and dual-energy modes and reconstructed at six reconstruction settings: two slice thicknesses (3 mm, 0.6 mm) and three reconstruction algorithms (FBP, IR3, IR5). Agatston and calcium volume scores were estimated from the reconstructed data using a segmentation-based approach. The total calcium score (summation over the four vessels) and male/female calcium scores (summation over male/female vessels scanned in the phantom without/with breast plates) were calculated accordingly. Both Agatston and calcium volume scores were comparable between single- and dual-energy scans (Pearson r = 0.99, p < 0.05). The total calcium scores were larger for the thinner slice thickness. Among the three reconstruction algorithms, FBP yielded the highest and IR5 the lowest scores. The total calcium scores from the phantom without breast plates were significantly larger than those from the phantom with breast plates, and the difference increased with stronger denoising in the iterative algorithms and with thicker slices. Both gender-based anatomical differences and vessel size affected the calcium scores, and the calcium volume scores tended to be underestimated for smaller vessels. These findings are valuable for understanding inconsistencies between women and men in calcium scoring, and for standardizing imaging protocols for improved gender-specific calcium scoring.
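
    For reference, the classic per-slice Agatston rule that segmentation-based scoring implements is easy to state in code; the sketch below is the textbook definition, not the study's exact pipeline, and all names are illustrative:

```python
import numpy as np
from scipy import ndimage

def agatston_slice(hu, pixel_area_mm2, min_area_mm2=1.0):
    """Agatston score contribution of one axial slice (sketch).

    hu: 2D array of CT numbers. Lesions are connected components of
    voxels >= 130 HU whose area exceeds min_area_mm2; each lesion
    contributes area * weight, with the weight set by its peak HU."""
    mask = hu >= 130
    labels, n = ndimage.label(mask)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                       # ignore sub-threshold specks
        peak = hu[lesion].max()
        w = 1 + min(int(peak // 100) - 1, 3)  # 130-199:1 ... >=400:4
        score += area * w
    return score
```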

  13. A low-count reconstruction algorithm for Compton-based prompt gamma imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hsuan-Ming; Liu, Chih-Chieh; Jan, Meei-Ling; Lee, Ming-Wei

    2018-04-01

    The Compton camera is an imaging device which has been proposed to detect prompt gammas (PGs) produced by proton-nuclear interactions within tissue during proton beam irradiation. Compton-based PG imaging has been developed to verify proton ranges because PG rays, particularly characteristic ones, correlate strongly with the distribution of the proton dose. However, accurate image reconstruction from characteristic PGs is challenging because detector efficiency and resolution are generally low. Our previous study showed that point spread functions can be incorporated into the reconstruction process to improve image resolution. In this study, we propose a low-count reconstruction algorithm that improves the image quality of a characteristic PG emission by pooling information from the other characteristic PG emissions. PGs were simulated from a proton beam irradiating a water phantom, and a two-stage Compton camera was used for PG detection. The results show that the image quality of the reconstructed characteristic PG emission is improved with the proposed method in contrast to the standard reconstruction method, which uses events from only one characteristic PG emission. For the 4.44 MeV PG rays, both methods predict the positions of the peak and the distal falloff with a mean accuracy of 2 mm. Moreover, only the proposed method improves the estimated positions of the peak and the distal falloff of the 5.25 MeV PG rays, for which a mean accuracy of 2 mm is likewise reached.

  14. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  15. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray micro computed tomography (micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to significantly higher radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, reconstruction algorithms are needed that reduce the radiation dose and scan time without degrading reconstructed image quality. This research focuses on combining gradient-based Douglas-Rachford splitting with discrete wavelet packet shrinkage denoising to design an algorithm for the reconstruction of large-scale, reduced-view synchrotron micro-CT images with acceptable quality metrics. These metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment on a synthetic head phantom and a femoral cortical bone sample, imaged at the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source, demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.

  16. Application of distance-dependent resolution compensation and post-reconstruction filtering for myocardial SPECT

    NASA Astrophysics Data System (ADS)

    Hutton, Brian F.; Lau, Yiu H.

    1998-06-01

    Compensation for distance-dependent resolution can be directly incorporated in maximum-likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projection data were simulated including attenuation and distance-dependent resolution, and were reconstructed using conventional EM and OSEM with subset sizes 2 and 4, with and without 3D compensation for detector response (CDR). Post-reconstruction filtering (PRF) was also performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-). Image quality and reconstruction accuracy were improved when CDR was included, and image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than improved noise with no reduction in the recovery coefficient for myocardium, but the effect was smaller when CDR was incorporated in the reconstruction. CDR alone provided better results than PRF without CDR. The results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
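
    Resolution compensation of this kind amounts to building the detector response into the system model of ML-EM. A minimal sketch with a single depth-independent Gaussian standing in for the true distance-dependent response (the paper's compensation varies with distance; all names are illustrative):

```python
import numpy as np
from scipy import ndimage

def mlem_cdr(x0, forward, back, y, psf_sigma, n_iter=30):
    """ML-EM with a simple detector-response model inside the system
    matrix (sketch).

    forward/back: function handles for projection and backprojection;
    psf_sigma: Gaussian blur (pixels) standing in for the collimator
    response. Because the Gaussian blur is self-adjoint, the adjoint
    system operator is blur(back(.))."""
    blur = lambda img: ndimage.gaussian_filter(img, psf_sigma)
    x = x0.copy()
    sens = blur(back(np.ones_like(y)))       # sensitivity image A^T 1
    for _ in range(n_iter):
        yhat = forward(blur(x))              # A x = P (B x)
        ratio = y / np.clip(yhat, 1e-12, None)
        x *= blur(back(ratio)) / np.clip(sens, 1e-12, None)
    return x
```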

  17. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high-resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction; truncation of the projection data results in artifacts in the reconstructed images. In this study, we propose a low-resolution-image constrained reconstruction algorithm (LRICR) for high-resolution interior tomography in microscopic CT. In general, multi-resolution acquisition methods can solve the data truncation problem if projection data acquired at low resolution are used to fill in the truncated projection data acquired at high resolution; however, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, a full scan at low resolution and a partial scan at high resolution are carried out. Using the image reconstructed from the sparse low-resolution projection data as the prior, a high-resolution microscopic image is reconstructed from the truncated high-resolution projection data. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed algorithm. Compared with the conventional TV-minimization-based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significantly better reduction of the artifacts caused by data truncation, providing a practical solution for high-quality and reliable interior tomography in microscopic CT applications.

  18. A 3D reconstruction algorithm for magneto-acoustic tomography with magnetic induction based on ultrasound transducer characteristics.

    PubMed

    Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-12-21

    In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm addresses the blurring of the MAT-MI acoustic source image caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from a real transducer. With reference to the S-L model used in computed tomography, a 3D phantom model of electrical conductivity was set up, and both sphere and cylinder scanning geometries were adopted in the computer simulation. Then, using finite element analysis, the distributions of the eddy current, the acoustic source and the acoustic pressure were obtained with the transducer model matrix. Next, using singular value decomposition, the inverse of the transducer model matrix and the corresponding reconstruction algorithm were worked out, and the acoustic source and conductivity images were reconstructed. Comparisons between an ideal point transducer and the realistic transducer were made to evaluate the algorithms, and an experiment was performed using a graphite phantom. Images of the acoustic source reconstructed using the proposed algorithm match the true source better than those from the previous algorithm: the correlation coefficient is 98.49% for the sphere scanning geometry and 94.96% for the cylinder scanning geometry. Comparison between the ideal point transducer and the realistic transducer shows correlation coefficients of 90.2% in the sphere scanning geometry and 86.35% in the cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution with the proposed algorithm. We conclude that the proposed reconstruction algorithm, which accounts for the characteristics of the transducer, markedly improves the resolution of the reconstructed image. This study can be applied to analyse the effects of transducer position and scanning geometry on imaging, and may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.
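
    The singular-value-decomposition inversion at the center of such a method can be sketched generically: a truncated SVD pseudo-inverse stabilizes the ill-conditioned transducer model matrix. A minimal sketch, with the truncation rule an assumption rather than the paper's choice:

```python
import numpy as np

def tsvd_inverse(A, b, tol=1e-3):
    """Truncated-SVD solution of A x = b (sketch).

    A: model matrix (e.g., a measured transducer response matrix);
    b: measured data. Singular values below tol * s_max are discarded
    to keep noise from being amplified by the inversion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```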

  19. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  20. Reconstructing genome-wide regulatory network of E. coli using transcriptome data and predicted transcription factor activities

    PubMed Central

    2011-01-01

    Background Gene regulatory networks play essential roles in living organisms: they control growth, keep internal metabolism running and respond to external environmental changes. Understanding the connections and the activity levels of regulators is important for research on gene regulatory networks. While relevance-score-based algorithms that reconstruct gene regulatory networks from transcriptome data can infer genome-wide networks, they are unfortunately prone to false positive results. Transcription factor activities (TFAs) quantitatively reflect the ability of a transcription factor to regulate target genes. However, classic relevance-score-based reconstruction algorithms use models that do not include the TFA layer, thus missing a key regulatory element. Results This work integrates TFA prediction algorithms with relevance-score-based network reconstruction algorithms to reconstruct gene regulatory networks with improved accuracy over classic relevance-score-based algorithms. The method is called Gene expression and Transcription factor activity based Relevance Network (GTRNetwork). Different combinations of TFA prediction algorithms and relevance score functions were applied to find the most efficient combination. When the integrated GTRNetwork method was applied to E. coli data, the reconstructed genome-wide gene regulatory network predicted 381 new regulatory links. The reconstructed network, including the predicted new regulatory links, shows promising biological significance: many of the new links are verified by known TF binding site information, and many others can be verified from the literature and databases such as EcoCyc. The reconstructed gene regulatory network was applied to a recent transcriptome analysis of E. coli during isobutanol stress; in addition to the 16 significantly changed TFAs detected in the original paper, another 7 significantly changed TFAs were detected using our reconstructed network. Conclusions The GTRNetwork algorithm introduces the hidden TFA layer into classic relevance-score-based gene regulatory network reconstruction. Integrating this biological information with regulatory network reconstruction algorithms significantly improves the detection of new links and reduces the rate of false positives. The application of GTRNetwork to E. coli gene transcriptome data gives a set of potential regulatory links with promising biological significance for isobutanol stress and other conditions. PMID:21668997

  1. Simulation and performance of an artificial retina for 40 MHz track reconstruction

    DOE PAGES

    Abba, A.; Bedeschi, F.; Citterio, M.; ...

    2015-03-05

    We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb at the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We find the performance of the retina pattern-recognition algorithm to be comparable with that of the full LHCb reconstruction algorithm.

  2. A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy

    PubMed Central

    Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.

    2017-01-01

    One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997

  3. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves

    PubMed Central

    Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the locally truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018

  4. Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang

    2015-04-01

    PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods had been performed to date. In this study, we compared reconstruction methods with various filters in terms of spatial resolution, non-uniformity (NU), recovery coefficients (RCs), and spillover ratios (SORs); in addition, the linearity between measured and true radioactivity concentrations was assessed. A Siemens Inveon PET scanner was used. Spatial resolution was measured according to the NEMA standard using a 1 mm3 18F point source, and image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered-subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) reconstruction with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image quality phantom filled with 18F was imaged and reconstructed using FBP, OSEM and MAP (β = 1.5 and 5 × 10-5). The highest achievable volumetric resolution was 2.31 mm3, and the highest RCs were obtained when OSEM 2D was used. The SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected activity below 16 MBq/ml when FBP or OSEM 2D were used; by contrast, with MAP reconstruction the activity of the reconstructed image increased proportionally regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity above 16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and the highest RCs among the tested methods, and yields a linear relation between measured and true concentrations within the stated activity range. Our data collectively show that the OSEM 2D reconstruction method provides quantitatively accurate reconstructed PET data.

  5. Pediatric lower extremity mower injuries.

    PubMed

    Hill, Sean M; Elwood, Eric T

    2011-09-01

    Lawn mower injuries in children represent an unfortunately common problem for the plastic reconstructive surgeon; approximately 68,000 are reported per year in the United States. Compounding this problem is the fact that a standard treatment algorithm does not exist. This study follows a series of 7 pediatric patients treated for lower extremity mower injuries by a single plastic surgeon. The extent of soft tissue injury varied. All patients were treated with negative pressure wound therapy as a bridge to definitive closure. Of the 7 patients, 4 required skin grafts, 1 required primary closure, 1 underwent a lower extremity amputation secondary to wounds, and 1 was repaired using a cross-leg flap. Functional limitations were minimal for all of our patients after reconstruction. Our basic treatment algorithm is presented: initial debridement followed by the simplest possible method of wound closure, using negative pressure wound therapy if necessary.

  6. Material Interface Reconstruction in VisIt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meredith, J S

    In this paper, we first survey a variety of approaches to material interface reconstruction and their applicability to visualization, and we investigate the details of the current reconstruction algorithm in the VisIt scientific analysis and visualization tool. We then provide a novel implementation of the original VisIt algorithm that makes use of a wide range of the finite element zoo during reconstruction. This approach results in dramatic improvements in quality and performance without sacrificing the strengths of the VisIt algorithm as it relates to visualization.

  7. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove the wavelet-domain noise left by bivariate shrinkage and achieve better image reconstruction quality. Experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms in overall QBCS reconstruction: it obtains better reconstruction quality at a relatively moderate computational cost, which makes it desirable for aerial imagery applications.
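
    The backbone of the method, a projected Landweber iteration, alternates a gradient step on the data term with a sparsity-promoting shrinkage. A minimal dense-matrix sketch (QBCS-EPL thresholds in the wavelet domain with an entropy-aware rule; here a plain soft threshold stands in, and all names are illustrative):

```python
import numpy as np

def projected_landweber(Phi, y, step, thresh, n_iter=100):
    """Projected Landweber iteration for y = Phi @ x (sketch).

    A Landweber gradient step on the data-fidelity term is followed
    by soft thresholding acting as the sparsity 'projection'.
    Convergence requires step < 2 / ||Phi||^2."""
    x = Phi.T @ y                               # simple initialization
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)    # Landweber step
        x = np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)  # shrink
    return x
```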

  8. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. In this work, sparse representation theory is introduced into the nearest-neighbor selection problem, and a super-resolution image reconstruction algorithm based on multi-class dictionaries is presented within the sparse-representation framework. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation, improving the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.

  9. A Toolbox for Ab Initio 3-D Reconstructions in Single-particle Electron Microscopy

    PubMed Central

    Voss, Neil R; Lyumkis, Dmitry; Cheng, Anchi; Lau, Pick-Wei; Mulder, Anke; Lander, Gabriel C; Brignole, Edward J; Fellmann, Denis; Irving, Christopher; Jacovetty, Erica L; Leung, Albert; Pulokas, James; Quispe, Joel D; Winkler, Hanspeter; Yoshioka, Craig; Carragher, Bridget; Potter, Clinton S

    2010-01-01

    Structure determination of a novel macromolecular complex via single-particle electron microscopy depends upon overcoming the challenge of establishing a reliable 3-D reconstruction using only 2-D images. There are a variety of strategies that deal with this issue, but not all of them are readily accessible and straightforward to use. We have developed a “toolbox” of ab initio reconstruction techniques that provide several options for calculating 3-D volumes in an easily managed and tightly controlled work-flow that adheres to standard conventions and formats. This toolbox is designed to streamline the reconstruction process by removing the necessity for bookkeeping, while facilitating transparent data transfer between different software packages. It currently includes procedures for calculating ab initio reconstructions via random or orthogonal tilt geometry, tomograms, and common lines, all of which have been tested using the 50S ribosomal subunit. Our goal is that the accessibility of multiple independent reconstruction algorithms via this toolbox will improve the ease with which models can be generated, and provide a means of evaluating the confidence and reliability of the final reconstructed map. PMID:20018246

  10. SCENERY: a web application for (causal) network reconstruction from cytometry data

    PubMed Central

    Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper

    2017-01-01

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set up their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. PMID:28525568

  11. Enhanced image fusion using directional contrast rules in fuzzy transform domain.

    PubMed

    Nandal, Amita; Rosales, Hamurabi Gamboa

    2016-01-01

    In this paper a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks, and the components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed into original-size blocks using the inverse FTR, and these inverse-transformed blocks are finally fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.

  12. Postinjection single photon transmission tomography with ordered-subset algorithms for whole-body PET imaging

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Kinahan, P. E.; Brasse, D.; Comtat, C.; Townsend, D. W.

    2002-02-01

    We have evaluated the penalized ordered-subset transmission reconstruction (OSTR) algorithm for postinjection single photon transmission scanning. The OSTR algorithm of Erdogan and Fessler (1999) uses a more accurate model for transmission tomography than ordered-subsets expectation-maximization (OSEM) when OSEM is applied to the logarithm of the transmission data. The OSTR algorithm is directly applicable to postinjection transmission scanning with a single photon source, as emission contamination from the patient mimics the effect, in the original derivation of OSTR, of random coincidence contamination in a positron source transmission scan. Multiple noise realizations of simulated postinjection transmission data were reconstructed using OSTR, filtered backprojection (FBP), and OSEM algorithms. Due to the nonspecific task performance, or multiple uses, of the transmission image, multiple figures of merit were evaluated, including image noise, contrast, uniformity, and root mean square (rms) error. We show that: 1) the use of a three-dimensional (3-D) regularizing image roughness penalty with OSTR improves the tradeoffs in noise, contrast, and rms error relative to the use of a two-dimensional penalty; 2) OSTR with a 3-D penalty has improved tradeoffs in noise, contrast, and rms error relative to FBP or OSEM; and 3) the use of image standard deviation from a single realization to estimate the true noise can be misleading in the case of OSEM. We conclude that using OSTR with a 3-D penalty potentially allows for shorter postinjection transmission scans in single photon transmission tomography in positron emission tomography (PET) relative to FBP or OSEM reconstructed images with the same noise properties. This combination of singles+OSTR is particularly suitable for whole-body PET oncology imaging.

  13. TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in increased dose as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections, preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm to reconstruct the second scan from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method, and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the mean CT reconstruction errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains decomposed material images of similar quality to FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing the number of projections, and therefore the scan time, in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density maps of similar quality to those from two full scans. Our future work includes more phantom studies to validate the performance of our method.

  14. Superiorized algorithm for reconstruction of CT images from sparse-view and limited-angle polyenergetic data

    NASA Astrophysics Data System (ADS)

    Humphries, T.; Winn, J.; Faridani, A.

    2017-08-01

    Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
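
    The superiorization heuristic itself is compact: between iterations of the basic reconstruction algorithm, take small objective-reducing perturbations with summable step sizes. A generic sketch under those assumptions (not the paper's implementation; all names are illustrative):

```python
import numpy as np

def superiorize(x0, basic_step, tv, tv_descent, n_iter=50, n_pert=5,
                a0=0.5):
    """Generic superiorization loop (sketch).

    basic_step: one iteration of the underlying feasibility-seeking
    reconstruction (e.g., an ART/SIRT sweep over the projection data);
    tv: the objective to reduce; tv_descent: returns a nonascending
    direction for tv at x (e.g., a normalized negative subgradient).
    Perturbation sizes a0 * 0.99**k form a summable sequence."""
    x, k = x0.copy(), 0
    for _ in range(n_iter):
        for _ in range(n_pert):     # objective-reducing perturbations
            d = tv_descent(x)
            step = a0 * 0.99 ** k
            k += 1
            if tv(x + step * d) <= tv(x):
                x = x + step * d
        x = basic_step(x)           # feasibility-seeking step
    return x
```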

  15. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    PubMed Central

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy, in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of the object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized to convert the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations, such as reduced circular sinusoidal trajectories. PMID:20175463

  16. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT.

    PubMed

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A; Pan, Xiaochuan

    2010-01-01

    Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy, in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of the object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized to convert the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations, such as reduced circular sinusoidal trajectories.

  17. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    PubMed

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image-guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients, who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model, and a multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables CBCT reconstruction under a scanning protocol with as little as 0.1 mAs/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mAs/projection, an overall 36-72 times dose reduction is estimated for the fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably, and its high computational efficiency makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  18. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard dose) and with a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction; fracture loads of the vertebrae were determined biomechanically and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength, and there was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR: stronger regularization may corrupt the microstructure analysis, because the trabecular structure is a very fine detail that can be lost during regularization. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  19. Optimization of view weighting in tilted-plane-based reconstruction algorithms to minimize helical artifacts in multi-slice helical CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang

    2003-05-01

    In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone-beam artifacts by tilting the reconstruction plane to fit the helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation about the third axis; as a result, its capability of suppressing helical and cone-beam artifacts is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy jointly optimizes the suppression of helical and cone-beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating the artifact index and noise characteristics, showing that matched view weighting significantly improves both helical artifact suppression and noise characteristics or dose efficiency in comparison with non-matched view weighting. Finally, the matched view weighting approach is of practical importance for the development of multi-slice helical CT, because it maintains the computational structure of fan-beam filtered backprojection and demands no extra computational cost.

  20. Analytic reconstruction of magnetic resonance imaging signal obtained from a periodic encoding field.

    PubMed

    Rybicki, F J; Hrovat, M I; Patz, S

    2000-09-01

    We have proposed a two-dimensional PERiodic-Linear (PERL) magnetic encoding field geometry B(x, y) = g_y · y · cos(q_x · x) and a magnetic resonance imaging pulse sequence which incorporates two fields to image a two-dimensional spin density: a standard linear gradient in the x dimension, and the PERL field. Because of its periodicity, the PERL field produces a signal where the phase of the two dimensions is functionally different. The x dimension is encoded linearly, but the y dimension appears as the argument of a sinusoidal phase term. Thus, the time-domain signal and image spin density are not related by a two-dimensional Fourier transform. They are related by a one-dimensional Fourier transform in the x dimension and a new Bessel function integral transform (the PERL transform) in the y dimension. The inverse of the PERL transform provides a reconstruction algorithm for the y dimension of the spin density from the signal space. To date, the inverse transform has been computed numerically by a Bessel function expansion over its basis functions. This numerical solution approximated an infinite summation by a finite sum and thus introduced a truncation error. This work analytically determines the basis functions for the PERL transform and incorporates them into the reconstruction algorithm. The improved algorithm is demonstrated by (1) direct comparison between the numerically and analytically computed basis functions, and (2) reconstruction of a known spin density. The new solution for the basis functions also provides a proof of the system function for the PERL transform under specific conditions.
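
    The Bessel functions arise because a sinusoidal phase term expands into Bessel-weighted harmonics. A small numerical check of the underlying Jacobi-Anger identity (standard mathematics, not code from the paper) also shows where truncating the expansion introduces error:

    ```python
    # Jacobi-Anger identity: exp(i z cos(theta)) equals the Bessel series
    # sum_n i**n * J_n(z) * exp(i n theta); truncating at |n| <= N leaves
    # exactly the kind of truncation error the analytic basis functions avoid.
    import numpy as np
    from scipy.special import jv

    z = 3.0
    theta = np.linspace(0.0, 2.0 * np.pi, 200)
    direct = np.exp(1j * z * np.cos(theta))
    N = 20
    series = sum((1j ** n) * jv(n, z) * np.exp(1j * n * theta)
                 for n in range(-N, N + 1))
    print(np.max(np.abs(direct - series)))   # negligible once N well exceeds |z|
    ```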

  1. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data acquired under low-power partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to steer the search toward a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.
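
    A minimal sketch of the stage-dependent Gaussian filtering step, under our own assumptions about the schedule (the paper does not publish code here), could look like this:

    ```python
    # Illustrative core of a tunable output-frequency filter: each iteration of
    # a phase-retrieval-style loop low-passes the current estimate with a
    # Gaussian Fourier window whose width grows, so early iterations suppress
    # noise and later ones restore resolution. Data constraints are omitted.
    import numpy as np

    def gaussian_window(shape, sigma):
        ky, kx = np.indices(shape)
        cy, cx = shape[0] / 2.0, shape[1] / 2.0
        return np.exp(-((ky - cy) ** 2 + (kx - cx) ** 2) / (2.0 * sigma ** 2))

    def filtered_update(field, sigma):
        F = np.fft.fftshift(np.fft.fft2(field))
        return np.fft.ifft2(np.fft.ifftshift(F * gaussian_window(F.shape, sigma)))

    estimate = np.random.rand(128, 128)
    for it in range(100):
        sigma = 5.0 + 0.5 * it                # hypothetical widening schedule
        estimate = filtered_update(estimate, sigma)
    ```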

  2. Three-dimensional dictionary-learning reconstruction of (23)Na MRI data.

    PubMed

    Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M

    2016-04-01

    To reduce noise and artifacts in (23)Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial (23)Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo (23)Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo (23)Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled (23)Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.

  3. A method for the automatic reconstruction of fetal cardiac signals from magnetocardiographic recordings

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Alleva, G.; Comani, S.

    2005-10-01

    Fetal magnetocardiography (fMCG) allows monitoring of fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method against a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of the reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performance of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.
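
    One of the dedicated classification tests the abstract mentions is a frequency-content check. A hedged sketch of flagging an ICA source as fetal cardiac, where the band limits and the test itself are our illustrative assumptions:

    ```python
    # Flag an independent component as fetal cardiac if its power spectrum
    # peaks inside a plausible fetal heart-rate band (assumed here to be
    # 1.8-3.0 Hz, i.e. roughly 110-180 beats per minute).
    import numpy as np

    def looks_fetal_cardiac(source, fs, band=(1.8, 3.0)):
        spectrum = np.abs(np.fft.rfft(source - source.mean())) ** 2
        freqs = np.fft.rfftfreq(source.size, d=1.0 / fs)
        peak = freqs[np.argmax(spectrum[1:]) + 1]   # ignore the DC bin
        return band[0] <= peak <= band[1]
    ```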

  4. Extending the FairRoot framework to allow for simulation and reconstruction of free streaming data

    NASA Astrophysics Data System (ADS)

    Al-Turany, M.; Klein, D.; Manafov, A.; Rybalchenko, A.; Uhlig, F.

    2014-06-01

    The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to optimise accessibility for beginners and developers, to be flexible and to cope with future developments. FairRoot enhances the synergy between the different physics experiments. As a first step toward the simulation of free streaming data, time-based simulation was introduced to the framework. The next step is the simulation of an event source, which is achieved via a client-server system. After digitization, the so-called "samplers" can be started; each sampler reads the data of its corresponding detector from the simulation files and makes it available to the reconstruction clients. The system makes it possible to develop and validate online reconstruction algorithms. In this work, the design and implementation of the new architecture and the communication layer are described.

  5. Vectorization with SIMD extensions speeds up reconstruction in electron tomography.

    PubMed

    Agulleiro, J I; Garzón, E M; García, I; Fernández, J J

    2010-06-01

    Electron tomography allows structural studies of cellular structures at molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, so high performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors in combination with other single-processor optimization techniques. This approach succeeds in producing full resolution tomograms with an important reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach runs on standard computers without the need for specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processors' SIMD extensions in the field of 3D electron microscopy.
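
    The essence of the vector approach is replacing the per-voxel inner loop with data-parallel array operations that map onto SIMD units. A NumPy restatement of that idea (not the authors' code) for a simple nearest-neighbour backprojection:

    ```python
    # Loop over projection angles only; all voxels of a slice are updated at
    # once with whole-array (SIMD-friendly) operations.
    import numpy as np

    def backproject(sino, angles, size):
        vol = np.zeros((size, size))
        xx, yy = np.meshgrid(np.arange(size) - size / 2.0,
                             np.arange(size) - size / 2.0)
        n_det = sino.shape[1]
        for a, proj in zip(angles, sino):
            t = xx * np.cos(a) + yy * np.sin(a)             # detector coordinate
            idx = np.clip(np.round(t + n_det / 2.0).astype(int), 0, n_det - 1)
            vol += proj[idx]                                # vectorized gather
        return vol
    ```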

  6. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*

    PubMed Central

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-01-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm, such as its minimum data requirement. PMID:20617122

  7. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.

    PubMed

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-02-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm, such as its minimum data requirement.
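
    For background, the "filtration" step of BPF-type algorithms is a 1D Hilbert transform along reconstruction lines (general BPF theory, not a detail specific to this abstract). A minimal FFT implementation of the Hilbert transform of a sampled line:

    ```python
    # Hilbert transform via its Fourier multiplier -i*sign(k); the DC term is
    # zeroed by convention, and boundary effects of the finite line are ignored.
    import numpy as np

    def hilbert_transform(f):
        F = np.fft.fft(f)
        k = np.fft.fftfreq(f.size)
        return np.real(np.fft.ifft(-1j * np.sign(k) * F))
    ```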

  8. In vitro evaluation of a new iterative reconstruction algorithm for dose reduction in coronary artery calcium scoring

    PubMed Central

    Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard

    2017-01-01

    Background: Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients, and accompanying possible adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose: To evaluate a new iterative reconstruction (IR) algorithm for dose reduction in in vitro coronary artery calcium scoring at different tube currents. Material and Methods: An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. The reference tube current was 80 mAs as standard and was reduced stepwise to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified by calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results: In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantom. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on the selected parameters, radiation dose could be lowered by up to 86% in the large phantom when selecting a reference tube current of 10 mAs, with resulting Agatston levels close to the reference settings. Conclusion: New iterative reconstruction kernels may allow a reduction in tube current for established Agatston scoring protocols and consequently a substantial reduction in radiation exposure. PMID:28607763
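
    For reference, Agatston scoring itself follows a fixed, well-established recipe (the standard definition, not code from this study): lesions at or above 130 HU contribute their area times a density weight of 1-4 set by the lesion's peak HU.

    ```python
    # Per-lesion, per-slice Agatston contribution; the total score sums these
    # over all lesions and slices.
    import numpy as np

    def agatston_weight(max_hu):
        if max_hu >= 400: return 4
        if max_hu >= 300: return 3
        if max_hu >= 200: return 2
        return 1                                  # 130-199 HU

    def lesion_score(slice_hu, lesion_mask, pixel_area_mm2):
        """lesion_mask: boolean mask of one connected lesion >= 130 HU."""
        area_mm2 = lesion_mask.sum() * pixel_area_mm2
        return area_mm2 * agatston_weight(slice_hu[lesion_mask].max())
    ```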

  9. HIGH-RESOLUTION LINEAR POLARIMETRIC IMAGING FOR THE EVENT HORIZON TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.

  10. The Liquid Argon Software Toolkit (LArSoft): Goals, Status and Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pordes, Rush; Snider, Erica

    LArSoft is a toolkit that provides a software infrastructure and algorithms for the simulation, reconstruction and analysis of events in Liquid Argon Time Projection Chambers (LArTPCs). It is used by the ArgoNeuT, LArIAT, MicroBooNE, DUNE (including 35ton prototype and ProtoDUNE) and SBND experiments. The LArSoft collaboration provides an environment for the development, use, and sharing of code across experiments. The ultimate goal is to develop fully automatic processes for reconstruction and analysis of LArTPC events. The toolkit is based on the art framework and has a well-defined architecture to interface to other packages, including the GEANT4 and GENIE simulation software and the Pandora software development kit for pattern recognition. It is designed to facilitate and support the evolution of algorithms including their transition to new computing platforms. The development of the toolkit is driven by the scientific stakeholders involved. The core infrastructure includes standard definitions of types and constants, means to input experiment geometries as well as meta- and event-data in several formats, and relevant general utilities. Examples of algorithms experiments have contributed to date are: photon propagation; particle identification; hit finding; track finding and fitting; and electromagnetic shower identification and reconstruction. We report on the status of the toolkit and plans for future work.

  11. High-resolution Linear Polarimetric Imaging for the Event Horizon Telescope

    NASA Astrophysics Data System (ADS)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh; Doeleman, Sheperd S.; Wardle, John F. C.; Bouman, Katherine L.

    2016-09-01

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.
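
    A schematic of the MEM update direction (a generic entropy-plus-data-fit form; the actual EHT imagers operate on bispectra and polarimetric ratios, which are omitted here) can make the optimization concrete:

    ```python
    # Gradient-ascent step on relative entropy -sum I log(I/P) penalized by a
    # chi-squared misfit to linear visibility data V = A @ I. `alpha` balances
    # data fidelity against entropy; all names here are illustrative.
    import numpy as np

    def mem_step(I, A, V, prior, alpha, step=1e-3):
        entropy_grad = -(np.log(I / prior) + 1.0)
        chi2_grad = 2.0 * np.real(A.conj().T @ (A @ I - V))
        I = I + step * (entropy_grad - alpha * chi2_grad)
        return np.clip(I, 1e-12, None)            # keep intensities positive
    ```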

  12. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    PubMed

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed an objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels, as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on the GE DiscoveryCT750HD, 10%-52% on the Siemens Somatom Definition AS+, 49%-62% on the Toshiba Aquilion64, and 13%-44% on the Philips Ingenuity iCT256. The corresponding CNR increase was in the range of 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba, and 13%-77% on Philips. Most algorithms did not affect the MTF, except for VEO™, which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase, while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides a performance assessment of the iterative algorithms available from several mainstream CT manufacturers.

  13. PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.

    PubMed

    Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge

    2005-08-01

    Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector assembly remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
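
    Geometrically, a variable pitch just means the source z-position is the integral of a time-varying table speed rather than a linear function of gantry angle. An illustrative trajectory (all numbers invented):

    ```python
    # Variable-pitch helical source trajectory: z(t) is the running integral of
    # the table speed, so the helix stretches as the table accelerates.
    import numpy as np

    t = np.linspace(0.0, 4.0, 1000)                 # seconds
    omega = 2.0 * np.pi                             # one gantry rotation per second
    speed = 10.0 + 5.0 * t                          # table speed ramp (mm/s)
    angle = omega * t
    z = np.cumsum(speed) * (t[1] - t[0])            # z(t) = integral of speed
    source = np.stack([np.cos(angle), np.sin(angle), z], axis=1)
    ```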

  14. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates at equivalent perceived quality and more accurate reconstruction of the 3D models.
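
    The first stages of the decomposition cascade can be sketched with a Haar wavelet stand-in (the specific wavelet is our assumption, and the Minimize-Matrix-Size coder itself is omitted):

    ```python
    # Single-level 2D Haar DWT into LL/HL/LH/HH, followed by a DCT of the LL
    # sub-band, mirroring the first steps of the compression pipeline.
    import numpy as np
    from scipy.fftpack import dct

    def haar_dwt2(img):
        a = (img[0::2, :] + img[1::2, :]) / 2.0    # vertical average / detail
        d = (img[0::2, :] - img[1::2, :]) / 2.0
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
        lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, hl, lh, hh

    img = np.random.rand(256, 256)
    ll, hl, lh, hh = haar_dwt2(img)
    ll_dct = dct(dct(ll, axis=0, norm='ortho'), axis=1, norm='ortho')
    ```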

  15. Region-of-interest image reconstruction in circular cone-beam microCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Seungryong; Bian, Junguo; Pelizzari, Charles A.

    2007-12-15

    Cone-beam microcomputed tomography (microCT) is one of the most popular choices for small animal imaging, which is becoming an important tool for studying animal models with transplanted diseases. Region-of-interest (ROI) imaging techniques in CT, which can reconstruct an ROI image from the projection data set of the ROI, can be used not only for reducing imaging-radiation exposure to the subject and scatter to the detector but also for potentially increasing the spatial resolution of the reconstructed images. Increasing spatial resolution in microCT images can facilitate improved accuracy in many assessment tasks. A method proposed previously for increasing CT image spatial resolution entails the exploitation of the geometric magnification in cone-beam CT. Due to finite detector size, however, this method can lead to data truncation for a large geometric magnification. The Feldkamp-Davis-Kress (FDK) algorithm yields images with artifacts when truncated data are used, whereas the recently developed backprojection filtration (BPF) algorithm is capable of reconstructing ROI images without truncation artifacts from truncated cone-beam data. We apply the BPF algorithm to reconstructing ROI images from truncated data of three different objects acquired by our circular cone-beam microCT system. Reconstructed images obtained with the FDK and BPF algorithms from both truncated and nontruncated cone-beam data are compared. The results of the experimental studies demonstrate that, from certain truncated data, the BPF algorithm can reconstruct ROI images with quality comparable to that reconstructed from nontruncated data. In contrast, the FDK algorithm yields ROI images with truncation artifacts. Therefore, an implication of the studies is that, when truncated data are acquired with a configuration of a large geometric magnification, the BPF algorithm can be used for effective enhancement of the spatial resolution of an ROI image.

  16. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study proposes and tests an improved algorithm for incomplete projection data that generates a high-quality reconstructed image by reducing artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, to obtain higher contrast between boundary and non-boundary pixels. Then, a block-matching 3D filtering operator is used to suppress noise and improve the gray-level distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. The results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of images reconstructed from incomplete data. The SNRs and AGs of images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by the DART algorithm. Since the improved DART-ALBM algorithm is more robust in limited-view reconstruction, yielding clear image edges and a better gray-level distribution for non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.
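
    The distinguishing step of any DART variant is segmenting the current image to the known grey levels and then letting only boundary pixels vary. A miniature sketch of those two operations (the ALBM additions, i.e. the augmented-Lagrangian initialization and block-matching 3D filtering, are not reproduced):

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation

    def dart_segment(x, levels):
        """Snap each pixel to the nearest known grey level."""
        lv = np.asarray(levels, dtype=float)
        return lv[np.argmin(np.abs(x[..., None] - lv), axis=-1)]

    def boundary_mask(seg):
        """Mark pixels adjacent to a region of a different grey level."""
        edges = np.zeros(seg.shape, dtype=bool)
        for lv in np.unique(seg):
            region = seg == lv
            edges |= binary_dilation(region) & ~region
        return edges
    ```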

  17. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  18. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, which share Hilbert filtering as their common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. Unfortunately, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer exact on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-backprojection Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).

  19. NEMA NU 4-Optimized Reconstructions for Therapy Assessment in Cancer Research with the Inveon Small Animal PET/CT System.

    PubMed

    Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas

    2015-06-01

    We compared conventional filtered back-projection (FBP), two-dimensional ordered-subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between control and treated groups for all time points, with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.
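
    SUV itself is a fixed textbook quantity, so the comparison across reconstruction algorithms reduces to how each algorithm changes the measured ROI activity. For reference:

    ```python
    # Standard SUV definition: tissue activity concentration normalized by
    # injected dose per unit body weight (units must be consistent).
    def suv_mean(roi_activity_bq_per_ml, injected_dose_bq, body_weight_g):
        return roi_activity_bq_per_ml / (injected_dose_bq / body_weight_g)

    # e.g. 0.05 MBq/ml in a tumor ROI, 20 MBq injected into a 250 g rat:
    print(suv_mean(50_000.0, 20_000_000.0, 250.0))   # -> 0.625
    ```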

  20. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
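
    The "standard dipole kernel" the patent modifies has a simple closed form in k-space (a standard result in susceptibility imaging; the TV-regularized split Bregman inversion itself is not reproduced here):

    ```python
    # D(k) = 1/3 - kz^2 / |k|^2, the Fourier-domain dipole kernel relating a
    # susceptibility distribution to the measured field perturbation.
    import numpy as np

    def dipole_kernel(shape):
        kz, ky, kx = np.meshgrid(*(np.fft.fftfreq(n) for n in shape),
                                 indexing='ij')
        k2 = kx ** 2 + ky ** 2 + kz ** 2
        with np.errstate(divide='ignore', invalid='ignore'):
            D = 1.0 / 3.0 - kz ** 2 / k2
        D[k2 == 0] = 0.0                      # conventionally zero at DC
        return D
    ```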

  1. Exact BPF and FBP algorithms for nonstandard saddle curves.

    PubMed

    Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge

    2005-11-01

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.

  2. Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis

    PubMed Central

    Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2015-01-01

    We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
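
    The Hotelling observer has a compact textbook form, worth stating since the optimization reduces to maximizing it within the ROI (the general definition below is standard; the paper's ROI restriction and implementation details are not shown):

    ```python
    # Hotelling detectability SNR^2 = ds^T K^{-1} ds from sample ROIs, where
    # ds is the mean signal-present minus signal-absent difference and K the
    # average ROI covariance.
    import numpy as np

    def hotelling_snr2(signal_rois, background_rois):
        g1 = np.asarray(signal_rois).reshape(len(signal_rois), -1)
        g0 = np.asarray(background_rois).reshape(len(background_rois), -1)
        ds = g1.mean(axis=0) - g0.mean(axis=0)
        K = 0.5 * (np.cov(g1, rowvar=False) + np.cov(g0, rowvar=False))
        return float(ds @ np.linalg.solve(K, ds))
    ```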

  3. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
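
    The innovation-based adaptation can be sketched for a scalar filter: the sample variance of recent innovations is compared with its predicted value, and the measurement-noise estimate is adjusted to match. A minimal sketch under a random-walk state model; the flight-tested algorithm is more elaborate:

    ```python
    import numpy as np

    def adaptive_kalman(measurements, q=1e-4, r=1.0, window=50):
        x, p = 0.0, 1.0
        innovations, estimates = [], []
        for z in measurements:
            p += q                                   # predict (random walk)
            nu = z - x                               # innovation
            innovations.append(nu)
            if len(innovations) >= window:           # adapt R: E[nu^2] = P + R
                r = max(np.var(innovations[-window:]) - p, 1e-8)
            k = p / (p + r)                          # gain, then update
            x += k * nu
            p *= 1.0 - k
            estimates.append(x)
        return np.array(estimates)
    ```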

  4. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, namely the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  5. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, namely the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  6. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.

  7. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H

    2014-06-15

    Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphics processing units (GPU) to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis for each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations were performed on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections was used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. Image quality at different resolutions in the z-direction, with or without statistical weighting, was also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It was also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing, and that using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.
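
    The sparse-coding step being accelerated is orthogonal matching pursuit. A plain-NumPy version for one patch, mathematically equivalent to, but much slower than, the Cholesky-updated GPU variant the authors describe:

    ```python
    import numpy as np

    def omp(D, y, n_nonzero):
        """D: (dim, n_atoms) dictionary with unit-norm columns; y: (dim,) patch."""
        residual, support = y.copy(), []
        for _ in range(n_nonzero):
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x
    ```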

  8. Performance improvements in temperature reconstructions of 2-D tunable diode laser absorption spectroscopy (TDLAS)

    NASA Astrophysics Data System (ADS)

    Choi, Doo-Won; Jeon, Min-Gyu; Cho, Gyeong-Rae; Kamimoto, Takahiro; Deguchi, Yoshihiro; Doh, Deog-Hee

    2016-02-01

    Performance improvements were attained in the data reconstruction of two-dimensional tunable diode laser absorption spectroscopy (TDLAS). The Multiplicative Algebraic Reconstruction Technique (MART) algorithm was adopted for data reconstruction, using data obtained in an experiment measuring the temperature and concentration fields of gas flows. The measurement theory is based upon the Beer-Lambert law, and the measurement system consists of a tunable laser, collimators, detectors, and an analyzer. Methane was used as the fuel for combustion with air in a Bunsen-type burner. The data used for the reconstruction are the optical signals of 8 laser beams passing through a cross-section of the methane flame. The performance of the MART algorithm in data reconstruction was validated and compared with that of the Algebraic Reconstruction Technique (ART) algorithm.
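
    The canonical MART update is multiplicative: each ray rescales the pixels it crosses by the ratio of measured to predicted signal (the standard algorithm; the 8-beam TDLAS geometry and spectroscopic model are not represented):

    ```python
    import numpy as np

    def mart(A, y, n_pix, n_iter=50, relax=1.0):
        """A: (n_rays, n_pix) path-length matrix; y: measured line integrals."""
        x = np.ones(n_pix)
        for _ in range(n_iter):
            for i in range(len(y)):
                pred = A[i] @ x
                if pred > 0 and y[i] > 0:
                    x *= (y[i] / pred) ** (relax * A[i])   # per-pixel exponent a_ij
        return x
    ```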

  9. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough. Besides the large amount of calculation, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  10. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters from serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data is an ill-posed inverse problem: a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the parameter reconstruction is a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. The results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization is applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, in which only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.

  11. Optical cone beam tomography of Cherenkov-mediated signals for fast 3D dosimetry of x-ray photon beams in water.

    PubMed

    Glaser, Adam K; Andreozzi, Jacqueline M; Zhang, Rongxiao; Pogue, Brian W; Gladstone, David J

    2015-07-01

    To test the use of a three-dimensional (3D) optical cone beam computed tomography reconstruction algorithm for estimating the imparted 3D dose distribution from megavoltage photon beams in a water tank for quality assurance, by imaging the induced Cherenkov-excited fluorescence (CEF). An intensified charge-coupled device coupled to a standard non-telecentric camera lens was used to tomographically acquire two-dimensional (2D) projection images of CEF from a complex multileaf collimator (MLC) shaped 6 MV linear accelerator x-ray photon beam operating at a dose rate of 600 MU/min. The resulting projections were used to reconstruct the 3D CEF light distribution, a potential surrogate of imparted dose, using a Feldkamp-Davis-Kress cone beam backprojection reconstruction algorithm. Finally, the reconstructed light distributions were compared to the expected dose values from one-dimensional diode scans, 2D film measurements, and the 3D distribution generated from the clinical Varian ECLIPSE treatment planning system using a gamma index analysis. A Monte Carlo derived correction was applied to the Cherenkov reconstructions to account for beam hardening artifacts. 3D light volumes were successfully reconstructed over a 400 × 400 × 350 mm(3) volume at a resolution of 1 mm. The Cherenkov reconstructions showed agreement with all comparative methods and were also able to recover both inter- and intra-MLC leaf leakage. Based upon a 3%/3 mm criterion, the experimental Cherenkov light measurements showed an 83%-99% pass fraction depending on the chosen threshold dose. The results from this study demonstrate the use of optical cone beam computed tomography using CEF for profiling the imparted dose distribution from large-area megavoltage photon beams in water.
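
    The 3%/3 mm comparison uses the standard gamma index. A minimal 1D version (real implementations interpolate and work in 3D) shows the combined dose-difference/distance-to-agreement test:

    ```python
    import numpy as np

    def gamma_1d(dose_eval, dose_ref, x, dose_tol=0.03, dist_tol=3.0):
        """Global gamma: dose term normalized to dose_tol * max(dose_ref)."""
        ref_max = dose_ref.max()
        gam = np.empty_like(dose_eval)
        for i, (xi, de) in enumerate(zip(x, dose_eval)):
            dd = (de - dose_ref) / (dose_tol * ref_max)
            dx = (xi - x) / dist_tol
            gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
        return gam                                  # passes where gamma <= 1
    ```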

  12. Generation of high-dynamic range image from digital photo

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han

    2016-10-01

    A number of modern applications, such as medical imaging, satellite remote sensing, and virtual prototyping, use high dynamic range images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera must be calibrated. This article proposes a camera calibration method that uses the clear sky as a standard light source, taking the sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article presents basic algorithms for recovering real luminance values from an ordinary digital image, together with their software implementation. Moreover, examples of HDRIs reconstructed from ordinary images illustrate the article.

  13. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
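
    Two of the named building blocks have compact canonical forms, sketched below (standard formulas; the OSTR subsets and the TDM data term are not reproduced):

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        """Soft-threshold filtering step used in TDM-STF-style updates."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def fista_momentum(x_new, x_old, t_old):
        """FISTA extrapolation between successive iterates."""
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_old ** 2))
        z = x_new + ((t_old - 1.0) / t_new) * (x_new - x_old)
        return z, t_new
    ```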

  14. Comparison of SeaWinds Backscatter Imaging Algorithms

    PubMed Central

    Long, David G.

    2017-01-01

    This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
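
    Conventional DIB gridding is simple enough to state exactly: every measurement is averaged into the grid cell containing it (illustrative NumPy; SIR instead iteratively resolves sub-cell detail across passes):

    ```python
    import numpy as np

    def dib_grid(lat, lon, sigma0, lat_edges, lon_edges):
        total, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges],
                                     weights=sigma0)
        count, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
        with np.errstate(invalid='ignore'):
            return total / count          # NaN where a cell has no measurements
    ```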

  15. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. First, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
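
    A skeletal smoothed-l0 loop with a tanh-based substitution (our reading of the modification; the exact surrogate, constants, and the Newton step are assumptions, with a plain descent step shown instead):

    ```python
    import numpy as np

    def sl0_tanh(A, y, sigmas, inner=3, mu=2.0):
        """Shrink a smooth l0 surrogate while staying on the set Ax = y."""
        pinv = A.T @ np.linalg.inv(A @ A.T)
        x = pinv @ y                               # minimum-l2 initialization
        for sigma in sigmas:                       # decreasing smoothing widths
            for _ in range(inner):
                x = x - mu * sigma * np.tanh(x / sigma)   # push entries toward zero
                x = x - pinv @ (A @ x - y)         # project back onto Ax = y
        return x
    ```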

  16. GREIT: a unified approach to 2D linear EIT reconstruction of lung images.

    PubMed

    Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K

    2009-06-01

    Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.
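
    The object GREIT optimizes is a single linear reconstruction matrix. A baseline regularized least-squares choice is shown below as a generic starting point, not the GREIT matrix itself, which is trained against the figures of merit listed above:

    ```python
    import numpy as np

    def tikhonov_reconstruction_matrix(J, lam):
        """x_hat = R @ dv with R = J^T (J J^T + lam^2 I)^{-1};
        J is the linearized forward Jacobian, dv a voltage-difference frame."""
        n_meas = J.shape[0]
        return J.T @ np.linalg.inv(J @ J.T + lam ** 2 * np.eye(n_meas))
    ```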

  17. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
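
    The rigorous formulation admits a textbook dynamic program: choose k of n samples (endpoints forced) so that piecewise-linear interpolation through the kept samples minimizes total squared error. A compact, unoptimized sketch of that problem class, not the paper's cubic algorithm:

    ```python
    import numpy as np

    def interp_cost(sig, i, j):
        """Squared error of linearly interpolating sig between kept samples i, j."""
        t = np.arange(i, j + 1)
        line = np.interp(t, [i, j], [sig[i], sig[j]])
        return float(((sig[i:j + 1] - line) ** 2).sum())

    def best_subset(sig, k):
        n = len(sig)
        E = np.full((n, k), np.inf)   # E[i, m]: best error keeping m+1 samples, last at i
        prev = np.zeros((n, k), dtype=int)
        E[0, 0] = 0.0
        for m in range(1, k):
            for i in range(m, n):
                for p in range(m - 1, i):
                    c = E[p, m - 1] + interp_cost(sig, p, i)
                    if c < E[i, m]:
                        E[i, m], prev[i, m] = c, p
        path, i = [n - 1], n - 1
        for m in range(k - 1, 0, -1):
            i = prev[i, m]
            path.append(i)
        return path[::-1], float(E[n - 1, k - 1])
    ```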

  18. Noise spatial nonuniformity and the impact of statistical image reconstruction in CT myocardial perfusion imaging.

    PubMed

    Lauzier, Pascal Theriault; Tang, Jie; Speidel, Michael A; Chen, Guang-Hong

    2012-07-01

    To achieve high temporal resolution in CT myocardial perfusion imaging (MPI), images are often reconstructed using filtered backprojection (FBP) algorithms from data acquired within a short-scan angular range. However, the variation in the central angle from one time frame to the next in gated short scans has been shown to create detrimental partial scan artifacts when performing quantitative MPI measurements. This study has two main purposes. (1) To demonstrate the existence of a distinct detrimental effect in short-scan FBP, i.e., the introduction of a nonuniform spatial image noise distribution; this nonuniformity can lead to unexpectedly high image noise and streaking artifacts, which may affect CT MPI quantification. (2) To demonstrate that statistical image reconstruction (SIR) algorithms can be a potential solution to address the nonuniform spatial noise distribution problem and can also lead to radiation dose reduction in the context of CT MPI. Projection datasets from a numerically simulated perfusion phantom and an in vivo animal myocardial perfusion CT scan were used in this study. In the numerical phantom, multiple realizations of Poisson noise were added to projection data at each time frame to investigate the spatial distribution of noise. Images from all datasets were reconstructed using both FBP and SIR reconstruction algorithms. To quantify the spatial distribution of noise, the mean and standard deviation were measured in several regions of interest (ROIs) and analyzed across time frames. In the in vivo study, two low-dose scans at tube currents of 25 and 50 mA were reconstructed using FBP and SIR. Quantitative perfusion metrics, namely, the normalized upslope (NUS), myocardial blood volume (MBV), and first moment transit time (FMT), were measured for two ROIs and compared to reference values obtained from a high-dose scan performed at 500 mA. Images reconstructed using FBP showed a highly nonuniform spatial distribution of noise. This spatial nonuniformity led to large fluctuations in the temporal direction. In the numerical phantom study, the level of noise was shown to vary by as much as 87% within a given image, and as much as 110% between different time frames for a ROI far from isocenter. The spatially nonuniform noise pattern was shown to correlate with the source trajectory and the object structure. In contrast, images reconstructed using SIR showed a highly uniform spatial distribution of noise, leading to smaller unexpected noise fluctuations in the temporal direction when a short scan angular range was used. In the numerical phantom study, the noise varied by less than 37% within a given image, and by less than 20% between different time frames. Also, the noise standard deviation in SIR images was on average half of that of FBP images. In the in vivo studies, the deviation observed between quantitative perfusion metrics measured from low-dose scans and high-dose scans was mitigated when SIR was used instead of FBP to reconstruct images. (1) Images reconstructed using FBP suffered from nonuniform spatial noise levels. This nonuniformity is another manifestation of the detrimental effects caused by short-scan reconstruction in CT MPI. (2) Images reconstructed using SIR had a much lower and more uniform noise level and thus can be used as a potential solution to address the FBP nonuniformity. 
(3) Given the improvement in the accuracy of the perfusion metrics when using SIR, it may be desirable to use a statistical reconstruction framework to perform low-dose dynamic CT MPI.
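
    The noise-nonuniformity measurement described in the methods is straightforward to reproduce in outline: reconstruct many Poisson noise realizations of one frame, then map the per-pixel standard deviation and summarize it per ROI. A minimal sketch, with the array shapes as assumptions:

        import numpy as np

        def roi_noise_stats(recon_stack, roi_masks):
            """recon_stack: reconstructions of one frame over noise realizations,
            shape [n_real, ny, nx]; roi_masks: list of boolean [ny, nx] masks.
            Returns (mean, noise std) per ROI, as used to quantify nonuniformity."""
            stats = []
            for mask in roi_masks:
                vals = recon_stack[:, mask]          # [n_real, n_pixels_in_roi]
                stats.append((vals.mean(), vals.std()))
            return stats

        # A per-pixel noise map across realizations makes the spatial
        # nonuniformity of short-scan FBP directly visible:
        # noise_map = recon_stack.std(axis=0)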

  1. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.

  2. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
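
    The alternation described above can be sketched for a linearized problem y = Ax. The version below uses hard class assignments rather than full mixture-of-Gaussians responsibilities, and a fixed regularization weight alpha, so it is a simplified illustration of the reconstruction-classification loop rather than the authors' Bayesian algorithm.

        import numpy as np

        def reconstruct_classify(A, y, n_classes=3, alpha=1.0, n_outer=20):
            """Alternate (i) assigning each pixel to the nearest class mean and
            updating the means, with (ii) a zeroth-order Tikhonov reconstruction
            step whose prior mean is the assigned class mean."""
            n = A.shape[1]
            x = np.linalg.lstsq(A, y, rcond=None)[0]
            mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))  # initial class means
            AtA, Aty = A.T @ A, A.T @ y
            for _ in range(n_outer):
                labels = np.argmin((x[:, None] - mu[None, :]) ** 2, axis=1)  # hard E-step
                for c in range(n_classes):                                   # M-step
                    if np.any(labels == c):
                        mu[c] = x[labels == c].mean()
                prior_mean = mu[labels]
                # reconstruction step: Tikhonov pull toward the class means
                x = np.linalg.solve(AtA + alpha * np.eye(n), Aty + alpha * prior_mean)
            return x, labels, mu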

  3. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    PubMed

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and the information needed for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Two-dimensional data-based reconstruction methods are well suited to simulating the microstructure of sandstone reservoirs, but the reconstruction and division of clay minerals, which play a vital role in digital core reconstruction, remain challenging. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. A hybrid method was applied and, compared with a model reconstructed by the process-based method, it output a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. The Hoshen-Kopelman algorithm was then used to label the connected, unclassified clay clusters in the initial model, recording the number and size of the clusters, and the K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller ones on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as type, texture, and distribution, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
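
    A rough outline of the labeling-and-splitting step follows; scipy's connected-component labelling stands in for the Hoshen-Kopelman algorithm, and the size threshold and number of subclusters are illustrative assumptions.

        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        def label_and_split_clay(clay_mask, max_size=500, n_split=2):
            """Label connected clay clusters in a binary 3D mask, then split
            clusters larger than max_size with k-means on voxel coordinates,
            loosely following the division step described above."""
            labels, n = ndimage.label(clay_mask)
            sizes = ndimage.sum(clay_mask, labels, index=np.arange(1, n + 1))
            out = labels.copy()
            next_label = n + 1
            for lbl, size in zip(range(1, n + 1), sizes):
                if size > max_size:
                    coords = np.argwhere(labels == lbl)
                    sub = KMeans(n_clusters=n_split, n_init=10).fit_predict(coords)
                    for k in range(1, n_split):       # keep subcluster 0 on the old label
                        sel = coords[sub == k]
                        out[tuple(sel.T)] = next_label
                        next_label += 1
            return out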

  4. A nudging-based data assimilation method: the Back and Forth Nudging (BFN) algorithm

    NASA Astrophysics Data System (ADS)

    Auroux, D.; Blum, J.

    2008-03-01

    This paper deals with a new data assimilation algorithm, called Back and Forth Nudging (BFN). The standard nudging technique consists in adding to the equations of the model a relaxation term that forces the model towards the observations. The BFN algorithm consists in repeatedly performing forward and backward integrations of the model with relaxation (or nudging) terms, using opposite signs in the direct and inverse integrations, so as to make the backward evolution numerically stable. This algorithm was first tested on the standard Lorenz model with discrete observations (perfect or noisy) and compared with the variational assimilation method. The same type of study was then performed on the viscous Burgers equation, again comparing with the variational method and focusing on the time evolution of the reconstruction error, i.e. the difference between the reference trajectory and the identified one over a time period composed of an assimilation period followed by a prediction period. The possible use of the BFN algorithm as an initialization for the variational method was also investigated. Finally, the algorithm was tested on a layered quasi-geostrophic model with sea-surface height observations. The behaviours of the two algorithms were compared in the presence of perfect or noisy observations, and also for imperfect models, allowing conclusions to be drawn about the relative performance of the two algorithms.
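
    A minimal BFN sketch on the Lorenz model is given below, assuming full-state observations, explicit Euler integration, and a constant nudging gain K; each sweep integrates forward with gain +K and then backward with the opposite sign so that the backward evolution stays stable, returning an updated estimate of the initial state. All parameter values are illustrative.

        import numpy as np

        def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
            x, y, z = v
            return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

        def bfn(obs, t_obs, dt=0.01, K=10.0, n_sweeps=10, x0=None):
            """obs[k] is the observed (full) state at time t_obs[k]."""
            n_steps = int(round(t_obs[-1] / dt))
            x = np.array([1.0, 1.0, 1.0]) if x0 is None else np.asarray(x0, float)

            def nearest_obs(t):
                return obs[np.argmin(np.abs(t_obs - t))]

            for _ in range(n_sweeps):
                for i in range(n_steps):                 # forward sweep, gain +K
                    t = i * dt
                    x = x + dt * (lorenz(x) + K * (nearest_obs(t) - x))
                for i in range(n_steps, 0, -1):          # backward sweep, opposite sign
                    t = i * dt
                    x = x - dt * (lorenz(x) - K * (nearest_obs(t) - x))
                # x now approximates the initial state; the next sweep refines it
            return x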

  5. High-definition multidetector computed tomography for evaluation of coronary artery stents: comparison to standard-definition 64-detector row computed tomography.

    PubMed

    Min, James K; Swaminathan, Rajesh V; Vass, Melissa; Gallagher, Scott; Weinsaft, Jonathan W

    2009-01-01

    The assessment of coronary stents with present-generation 64-detector row computed tomography scanners that use filtered backprojection and operate at a standard definition of 0.5-0.75 mm (SDCT) is limited by imaging artifacts and noise. We evaluated the performance of a novel high-definition 64-slice CT scanner (HDCT), with improved spatial resolution (0.23 mm) and adaptive statistical iterative reconstruction (ASIR), for the evaluation of coronary artery stents. HDCT and SDCT stent imaging was performed with the use of an ex vivo phantom. HDCT was compared with SDCT with both smooth and sharp kernels for stent intraluminal diameter, intraluminal area, and image noise. Intrastent visualization was assessed with an ASIR algorithm on HDCT scans, compared with the filtered backprojection algorithms of SDCT. Six coronary stents (2.5, 2.5, 2.75, 3.0, 3.5, 4.0 mm) were analyzed by 2 independent readers. Interobserver correlation was high for both HDCT and SDCT. HDCT yielded substantially larger luminal area visualization compared with SDCT, both for smooth (29.4+/-14.5 versus 20.1+/-13.0; P<0.001) and sharp (32.0+/-15.2 versus 25.5+/-12.0; P<0.001) kernels. Stent diameter was higher with HDCT compared with SDCT, for both smooth (1.54+/-0.59 versus 1.00+/-0.50; P<0.0001) and detailed (1.47+/-0.65 versus 1.08+/-0.54; P<0.0001) kernels. With detailed kernels, HDCT scans that used ASIR showed a trend toward decreased image noise compared with SDCT filtered backprojection algorithms. On the basis of this ex vivo study, HDCT provides superior detection of intrastent luminal area and diameter visualization compared with SDCT. ASIR image reconstruction for HDCT scans enhances in-stent assessment while decreasing image noise.

  6. Comparison Study of Three Different Image Reconstruction Algorithms for MAT-MI

    PubMed Central

    Xia, Rongmin; Li, Xu

    2010-01-01

    We report a theoretical study on magnetoacoustic tomography with magnetic induction (MAT-MI). Based on a description of the signal generation mechanism using Green’s function, an acoustic dipole model was proposed to describe the acoustic source excited by the Lorentz force. Using Green’s function, three reconstruction algorithms based on different models of the acoustic source (potential energy, vectored acoustic pressure, and divergence of the Lorentz force) were derived and compared, and corresponding numerical simulations were conducted. The computer simulation results indicate that the potential energy method and the vectored pressure method can directly reconstruct the Lorentz force distribution and give a more accurate reconstruction of the electrical conductivity. PMID:19846363

  7. Reconstruction of brachytherapy seed positions and orientations from cone-beam CT x-ray projections via a novel iterative forward projection matching method.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-01-01

    To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate the proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate this algorithm. The phantom was scanned on an ACUITY CBCT digital simulator, acquiring a full set of 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 degrees, respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 +/- 0.57) mm or less. The theta and phi angle errors were found to be (5.7 +/- 4.9) degrees and (6.0 +/- 4.1) degrees, respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 degrees compared to the measured seed projections. Each test trial converged in 10-20 iterations with a computation time of 12-18 min/iteration on a 1 GHz processor. This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as centroids, from a small number of radiographic projections, in support of intraoperative planning and adaptive replanning. Unlike standard back-projection methods, gIFPM avoids the need to match corresponding seed images on the projections. This algorithm also successfully reconstructs overlapping clustered and highly migrated seeds in the implant. The accuracy of better than 1 mm and 6 degrees demonstrates that gIFPM has the potential to support 2D Task Group 43 calculations in clinical practice.
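
    The matching step at the heart of IFPM can be illustrated for a single point-like seed: render the seed into each view with a known projection matrix and let a generic optimizer minimize the summed squared difference against the measured binary images. The projection matrices, the Gaussian-blob rendering, and the optimizer choice are all assumptions of this toy version; the real gIFPM also recovers orientation and handles many seeds simultaneously.

        import numpy as np
        from scipy.optimize import minimize

        def project(point, P):
            """Project a 3D point with a 3x4 projection matrix P to 2D pixel coords."""
            h = P @ np.append(point, 1.0)
            return h[:2] / h[2]

        def splat(uv, shape, sigma=1.5):
            """Render a seed as a small Gaussian blob on the detector grid."""
            v, u = np.mgrid[0:shape[0], 0:shape[1]]
            return np.exp(-((u - uv[0]) ** 2 + (v - uv[1]) ** 2) / (2 * sigma ** 2))

        def ifpm_centroid(measured, Ps, x_init, shape):
            """Find the 3D seed position whose computed projections best match the
            measured images in the sum-of-squared-differences sense."""
            def cost(x):
                return sum(np.sum((splat(project(x, P), shape) - m) ** 2)
                           for P, m in zip(Ps, measured))
            return minimize(cost, x_init, method="Nelder-Mead").x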

  9. High-resolution reconstruction for terahertz imaging.

    PubMed

    Xu, Li-Min; Fan, Wen-Hui; Liu, Jia

    2014-11-20

    We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets (POCS) approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken with our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential application of HR reconstruction methods in terahertz imaging with pulsed and continuous-wave terahertz sources.
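
    Of the four methods listed, iterative backprojection is the easiest to sketch: simulate each low-resolution image from the current high-resolution estimate (shift, then decimate) and backproject the residuals. Integer-pixel shifts and a plain decimation forward model are simplifying assumptions here.

        import numpy as np

        def iterative_backprojection(lr_images, shifts, scale=2, n_iter=30, step=1.0):
            """lr_images: list of LR arrays; shifts: per-image (dy, dx) offsets in
            HR pixels. Returns an HR estimate refined by backprojected residuals."""
            hr = np.kron(np.mean(lr_images, axis=0), np.ones((scale, scale)))
            for _ in range(n_iter):
                corr = np.zeros_like(hr)
                for y_lr, (dy, dx) in zip(lr_images, shifts):
                    # forward model: shift the HR estimate, then decimate
                    sim = np.roll(hr, (dy, dx), axis=(0, 1))[::scale, ::scale]
                    resid = y_lr - sim
                    up = np.kron(resid, np.ones((scale, scale)))   # backproject
                    corr += np.roll(up, (-dy, -dx), axis=(0, 1))
                hr += step * corr / len(lr_images)
            return hr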

  10. Bayesian reconstruction of projection reconstruction NMR (PR-NMR).

    PubMed

    Yoon, Ji Won

    2014-11-01

    Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using a peak-by-peak reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm that replaces the simple linear model with a linear mixed model to reconstruct closely spaced NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA.

  11. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from the entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm and the relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on a FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along the left–right (L-R), anterior–posterior (A-P), and superior–inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along the L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along the L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves the image reconstruction accuracy of 4D-CBCT and the accuracy of tumor motion trajectory estimation as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.

  12. Regularization iteration imaging algorithm for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, converting the image reconstruction task into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
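
    The accelerated shrinkage iteration mentioned above is standard FISTA; a minimal version for the sparsity term alone (the low-rank part of the proposed cost function is omitted for brevity) looks like this:

        import numpy as np

        def fista_l1(A, y, lam=0.1, n_iter=200):
            """FISTA for min 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            z, t = x.copy(), 1.0
            for _ in range(n_iter):
                g = A.T @ (A @ z - y)
                w = z - g / L                      # gradient step at the momentum point
                x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # shrinkage
                t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
                z = x_new + (t - 1) / t_new * (x_new - x)                  # momentum
                x, t = x_new, t_new
            return x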

  13. Reconstructing cortical current density by exploring sparseness in the transform domain

    NASA Astrophysics Data System (ADS)

    Ding, Lei

    2009-05-01

    In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.

  14. A general Bayesian image reconstruction algorithm with entropy prior: Preliminary application to HST data

    NASA Astrophysics Data System (ADS)

    Nunez, Jorge; Llacer, Jorge

    1993-10-01

    This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to reconstruct successfully images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.

  15. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper combines the epipolar constraint with an ant colony optimization algorithm. The epipolar line constraint reduces the search range, and an ant colony algorithm is then used to optimize the stereo matching feature search function within the reduced range. Through the establishment of an analysis model of the ant colony algorithm's stereo matching optimization process, a global optimization solution for stereo matching in 3D reconstruction based on binocular vision is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.

  16. Exact BPF and FBP algorithms for nonstandard saddle curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Hengyong; Zhao Shiying; Ye Yangbo

    2005-11-15

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.

  17. Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Budzien, S. A.; Hei, M. A.

    2017-03-01

    We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach based on regularization to a partial differential equation.
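
    The basic ISRA fixed point mentioned above is compact enough to state directly: a multiplicative update for nonnegative least squares whose iterates stay nonnegative whenever the system matrix, the data, and the starting image are nonnegative. A minimal sketch:

        import numpy as np

        def isra(A, y, n_iter=100, eps=1e-12):
            """Image Space Reconstruction Algorithm:
            x <- x * (A^T y) / (A^T A x), assuming A >= 0 and y >= 0."""
            x = np.ones(A.shape[1])
            Aty = A.T @ y
            for _ in range(n_iter):
                x *= Aty / (A.T @ (A @ x) + eps)   # eps guards against division by zero
            return x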

  18. LOGISMOS-B for primates: primate cortical surface reconstruction and thickness measurement

    NASA Astrophysics Data System (ADS)

    Oguz, Ipek; Styner, Martin; Sanchez, Mar; Shi, Yundi; Sonka, Milan

    2015-03-01

    Cortical thickness and surface area are important morphological measures with implications for many psychiatric and neurological conditions. Automated segmentation and reconstruction of the cortical surface from 3D MRI scans is challenging due to the variable anatomy of the cortex and its highly complex geometry. While many methods exist for this task in the context of the human brain, these methods are typically not readily applicable to the primate brain. We propose an innovative approach based on our recently proposed human cortical reconstruction algorithm, LOGISMOS-B, and the Laplace-based thickness measurement method. Quantitative evaluation of our approach was performed on a dataset of T1- and T2-weighted MRI scans from 12-month-old macaques, where labeling by our anatomical experts was used as the independent standard. In this dataset, LOGISMOS-B has an average signed surface error of 0.01 +/- 0.03 mm and an unsigned surface error of 0.42 +/- 0.03 mm over the whole brain. Excluding the rather problematic temporal pole region further improves the unsigned surface distance to 0.34 +/- 0.03 mm. The high level of accuracy reached by our algorithm even in this challenging developmental dataset illustrates its robustness and its potential for primate brain studies.

  19. Particle identification algorithms for the PANDA Endcap Disc DIRC

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.

    2017-12-01

    The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, applied both in offline analysis and in online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with respect to the resulting constraints.

  20. Resolving boosted jets with XCone

    DOE PAGES

    Thaler, Jesse; Wilkason, Thomas F.

    2015-12-01

    We show how the recently proposed XCone jet algorithm smoothly interpolates between resolved and boosted kinematics. When using standard jet algorithms to reconstruct the decays of hadronic resonances like top quarks and Higgs bosons, one typically needs separate analysis strategies to handle the resolved regime of well-separated jets and the boosted regime of fat jets with substructure. XCone, by contrast, is an exclusive cone jet algorithm that always returns a fixed number of jets, so jet regions remain resolved even when (sub)jets are overlapping in the boosted regime. In this paper, we perform three LHC case studies (dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs) that demonstrate the physics applications of XCone over a wide kinematic range.

  1. Third-dimension information retrieval from a single convergent-beam transmission electron diffraction pattern using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Pennington, Robert S.; Van den Broek, Wouter; Koch, Christoph T.

    2014-05-01

    We have reconstructed third-dimension specimen information from convergent-beam electron diffraction (CBED) patterns simulated using the stacked-Bloch-wave method. By reformulating the stacked-Bloch-wave formalism as an artificial neural network and optimizing with resilient backpropagation, we demonstrate specimen orientation reconstructions with depth resolutions down to 5 nm. To show our algorithm's ability to analyze realistic data, we also demonstrate reconstruction from noisy data and from a limited number of CBED disks. Applicability of this reconstruction algorithm to other specimen parameters is discussed.

  2. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    PubMed

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar that not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
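
    For reference, the baseline OMP that the two-stage processor improves upon fits in a few lines: greedily select the dictionary column most correlated with the residual, then re-fit all selected coefficients by least squares. The two-stage algorithm itself is not reproduced here.

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit: recover a k-sparse x from y = Ax."""
            residual, support = y.copy(), []
            x = np.zeros(A.shape[1])
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
                if j not in support:
                    support.append(j)
                # re-fit all selected coefficients jointly (the "orthogonal" step)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x[support] = coef
            return x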

  3. DART: a practical reconstruction algorithm for discrete tomography.

    PubMed

    Batenburg, Kees Joost; Sijbers, Jan

    2011-09-01

    In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
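
    A compact sketch of the DART idea follows: segment the current reconstruction to the known gray values, freeze the interior pixels, and apply a few algebraic updates to the boundary pixels only. The boundary detection, the SIRT-like update, and the parameters are simplifications of the published algorithm (which, for example, also frees a random fraction of non-boundary pixels).

        import numpy as np
        from scipy import ndimage

        def dart(A, y, grays, x0, shape, n_outer=10, n_inner=5, lam=0.1):
            """y = A x for a 2D image flattened into x; grays: known gray values."""
            x = x0.copy()
            g = np.asarray(grays, float)
            col_norm = (A ** 2).sum(axis=0) + 1e-12
            for _ in range(n_outer):
                # segment each pixel to the nearest known gray value
                seg = g[np.argmin(np.abs(x[:, None] - g[None, :]), axis=1)]
                img = seg.reshape(shape)
                # boundary pixels: any pixel with a differently-valued neighbor
                boundary = (img != ndimage.grey_erosion(img, size=3)) | \
                           (img != ndimage.grey_dilation(img, size=3))
                free = boundary.ravel()
                x = seg.copy()
                for _ in range(n_inner):       # algebraic updates on free pixels only
                    r = y - A @ x
                    x[free] += lam * (A.T @ r)[free] / col_norm[free]
            return x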

  4. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.

  5. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high-resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause many problems related to low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio, providing accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. The time activity curve was adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under the low count rates encountered in dynamic studies with short scan times.
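
    Both OP-OSEM and the proposed PDS-OSEM build on the ordered-subsets EM update, which is short enough to show. The subset partitioning below is a plain stride over projection rows, an illustrative choice; real scanners partition by projection angle.

        import numpy as np

        def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
            """Ordered-subsets EM for emission data y = Poisson(A x):
            multiplicative EM updates cycled over subsets of the projection rows."""
            m, n = A.shape
            x = np.ones(n)
            subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
            for _ in range(n_iter):
                for idx in subsets:
                    As, ys = A[idx], y[idx]
                    sens = As.sum(axis=0) + eps          # subset sensitivity image
                    ratio = ys / (As @ x + eps)          # measured / estimated counts
                    x *= (As.T @ ratio) / sens           # multiplicative EM update
            return x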

  6. Rapid execution of fan beam image reconstruction algorithms using efficient computational techniques and special-purpose processors

    NASA Astrophysics Data System (ADS)

    Gilbert, B. K.; Robb, R. A.; Chu, A.; Kenue, S. K.; Lent, A. H.; Swartzlander, E. E., Jr.

    1981-02-01

    Rapid advances during the past ten years of several forms of computer-assisted tomography (CT) have resulted in the development of numerous algorithms to convert raw projection data into cross-sectional images. These reconstruction algorithms are either 'iterative,' in which a large matrix algebraic equation is solved by successive approximation techniques; or 'closed form'. Continuing evolution of the closed form algorithms has allowed the newest versions to produce excellent reconstructed images in most applications. This paper will review several computer software and special-purpose digital hardware implementations of closed form algorithms, either proposed during the past several years by a number of workers or actually implemented in commercial or research CT scanners. The discussion will also cover a number of recently investigated algorithmic modifications which reduce the amount of computation required to execute the reconstruction process, as well as several new special-purpose digital hardware implementations under development in laboratories at the Mayo Clinic.

  7. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. The new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate the new analytic algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team, first on synthetic data and afterwards on real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.

  8. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.

    2010-01-15

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  9. [Algorithms for treatment of complex hand injuries].

    PubMed

    Pillukat, T; Prommersberger, K-J

    2011-07-01

    The primary treatment strongly influences the course and prognosis of hand injuries. Complex injuries which compromise functional recovery are especially challenging. Despite an apparently unlimited number of injury patterns, it is possible to develop strategies which facilitate a standardized approach to operative treatment. In this situation, algorithms can be important guidelines for a rational approach. The following algorithms have proven themselves in our own experience of treating complex injuries of the hand. They were modified according to the current literature and cover prehospital care, emergency room management, basic strategy in general and, in detail, the reconstruction of bone and joints, vessels, nerves, tendons, and soft tissue coverage. Algorithms facilitate the treatment of severe hand injuries. By applying simple yes/no decisions, complex injury patterns are split into distinct partial problems which can be managed step by step.

  10. Image quality comparison of two adaptive statistical iterative reconstruction (ASiR, ASiR-V) algorithms and filtered back projection in routine liver CT.

    PubMed

    Chen, Li-Hong; Jin, Chao; Li, Jian-Ying; Wang, Ge-Liang; Jia, Yong-Jun; Duan, Hai-Feng; Pan, Ning; Guo, Jianxin

    2018-06-06

    To compare the image quality of two adaptive statistical iterative reconstruction (ASiR and ASiR-V) algorithms using objective and subjective metrics for routine liver CT, with conventional filtered back projection (FBP) reconstructions as the reference standard. This institutional review board-approved study included 52 patients with clinically suspected hepatic metastases. Patients were divided equally into ASiR and ASiR-V groups with the same scan parameters. Images were reconstructed with ASiR and ASiR-V from 0 (FBP) to 100% blending percentages at 10% intervals in their respective groups. The mean and standard deviation of CT numbers for liver parenchyma were recorded. Two experienced radiologists reviewed all images for image quality blindly and independently. Data were statistically analyzed. There was no difference in CT dose index between the ASiR and ASiR-V groups. As the percentage of ASiR and ASiR-V increased from 10 to 100%, image noise was reduced by 8.6-57.9% and 8.9-81.6%, respectively, compared with FBP. There was substantial interobserver agreement in image quality assessment for ASiR and ASiR-V images. Compared with FBP reconstruction, subjective image quality scores of ASiR and ASiR-V improved significantly as the percentage increased from 10 to 80% for ASiR (peaking at 50% with 32.2% noise reduction) and from 10 to 90% for ASiR-V (peaking at 60% with 51.5% noise reduction). Both ASiR and ASiR-V improved the objective and subjective image quality for routine liver CT compared with FBP. ASiR-V provided further image quality improvement with a higher acceptable percentage than ASiR, and ASiR-V 60% had the highest image quality score. Advances in knowledge: (1) Both ASiR and ASiR-V significantly reduce image noise compared with conventional FBP reconstruction. (2) ASiR-V with a 60% blending percentage provides the highest image quality score in routine liver CT.

  11. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    NASA Astrophysics Data System (ADS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.

    2006-06-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square-pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block-row needs to be stored. Data were generated using a fast ray-tracing Monte Carlo simulator. The proposed method was compared to a list-mode MLEM algorithm, which used ray tracing for forward projection and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method exhibited increased performance as compared to a standard list-mode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
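
    Because the matrix M is block circulant, its action reduces to a circular convolution over the block index, which an FFT diagonalizes; only one block-row (equivalently, one block-column, up to index reversal) ever needs to be stored. A minimal NumPy sketch of that matrix-vector product, assuming the stored blocks form the first block-column:

      import numpy as np

      def block_circulant_matvec(blocks, x):
          """y_i = sum_j blocks[(i - j) % n] @ x_j via an FFT over the block index.

          blocks: (n, p, q) first block-column;  x: (n, q);  returns (n, p).
          """
          Bf = np.fft.fft(blocks, axis=0)        # diagonalize the circulant structure
          xf = np.fft.fft(x, axis=0)
          yf = np.einsum('kpq,kq->kp', Bf, xf)   # one small block product per frequency
          return np.fft.ifft(yf, axis=0).real

      # check against the dense block-circulant matrix
      rng = np.random.default_rng(0)
      n, p, q = 8, 3, 4
      B = rng.standard_normal((n, p, q))
      x = rng.standard_normal((n, q))
      dense = np.block([[B[(i - j) % n] for j in range(n)] for i in range(n)])
      assert np.allclose(dense @ x.ravel(), block_circulant_matvec(B, x).ravel())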

  12. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
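
    The EM-preconditioner has a simple closed form: scaling a gradient step on the Kullback-Leibler data term by x / (A^T 1) reproduces exactly the classical EM update. The sketch below shows that preconditioned step; to stay short it replaces the TV proximity step of the full PAPA iteration with a plain nonnegativity projection, so it illustrates the preconditioning idea only, not the complete algorithm.

      import numpy as np

      def em_preconditioned_step(A, y, x, eps=1e-12):
          """One EM-preconditioned gradient step on the Poisson/KL data term."""
          sens = A.T @ np.ones(A.shape[0])                  # A^T 1, the sensitivity image
          grad = sens - A.T @ (y / np.maximum(A @ x, eps))  # gradient of the KL term
          precond = x / np.maximum(sens, eps)               # the EM-preconditioner
          return np.maximum(x - precond * grad, 0.0)        # algebraically the EM update

      rng = np.random.default_rng(0)
      A = rng.random((200, 50))                         # toy system matrix
      x_true = rng.random(50)
      y = rng.poisson(A @ x_true).astype(float)         # Poisson-distributed counts

      x = np.full(50, y.mean())
      for _ in range(200):
          x = em_preconditioned_step(A, y, x)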

  13. Functional validation and comparison framework for EIT lung imaging.

    PubMed

    Grychtol, Bartłomiej; Elke, Gunnar; Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy

    2014-01-01

    Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Our results indicate that, while variation in the appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced ones perform well, while some others are significantly worse. Given its vintage and ad hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.

  14. CORRIGENDUM: A new algorithm for the shape reconstruction of perfectly conducting objects

    NASA Astrophysics Data System (ADS)

    Çayören, M.; Akduman, I.; Yapar, A.; Crocco, L.

    2010-03-01

    The reference list should have included the conference communications [1] and [2], wherein we introduced the algorithm described in this paper. Note that a less complete description of the algorithm was given in [1]. However, the example considering a bean-shaped target is the same in the two papers and it is reused in this paper by kind permission of the Applied Computational Electromagnetics Society. References [1] Crocco L, Akduman I, Çayören M and Yapar A 2007 A new method for shape reconstruction of perfectly conducting targets The 23rd Annual Review of Progress in Applied Computational Electromagnetics (Verona, Italy) [2] Çayören M, Akduman I, Yapar A and Crocco L 2007 A new algorithm for the shape reconstruction of perfectly conducting objects Progress in Electromagnetics Research Symposium (PIERS) (Beijing, PRC)

  15. Simulation studies of the fidelity of biomolecular structure ensemble recreation

    NASA Astrophysics Data System (ADS)

    Lätzer, Joachim; Eastwood, Michael P.; Wolynes, Peter G.

    2006-12-01

    We examine the ability of Bayesian methods to recreate structural ensembles for partially folded molecules from averaged data. Specifically, we test the ability of various algorithms to recreate different transition state ensembles for folding proteins using a multiple-replica simulation algorithm with input from "gold standard" reference ensembles that were first generated with a Gō-like Hamiltonian having nonpairwise additive terms. A set of low-resolution data, which function as the "experimental" ϕ values, was first constructed from this reference ensemble. The resulting ϕ values were then treated as one would treat laboratory experimental data and were used as input to the replica reconstruction algorithm. The resulting ensembles of structures obtained by the replica algorithm were compared to the gold standard reference ensemble from which those "data" were, in fact, obtained. It is found that for a unimodal transition state ensemble with a low barrier, the multiple-replica algorithm does recreate the reference ensemble fairly successfully when no experimental error is assumed. The Kolmogorov-Smirnov test as well as principal component analysis show that the overlap of the recovered and reference ensembles is significantly enhanced when multiple replicas are used. Reduction of the multiple-replica ensembles by clustering successfully yields subensembles with close similarity to the reference ensembles. On the other hand, for a high-barrier transition state with two distinct transition state ensembles, the single-replica algorithm only samples a few structures of one of the reference ensemble basins. This is due to the fact that the ϕ values are intrinsically ensemble-averaged quantities. The replica algorithm with multiple copies does sample both reference ensemble basins. In contrast to the single-replica case, the multiple replicas are constrained to reproduce the average ϕ values but allow fluctuations in ϕ for each individual copy. These fluctuations facilitate a more faithful sampling of the reference ensemble basins. Finally, we test how robustly the reconstruction algorithm can function by introducing errors in ϕ comparable in magnitude to those suggested by some authors. In this circumstance we observe that the chances of ensemble recovery with the replica algorithm are poor using a single replica, but are improved when multiple copies are used. A multimodal transition state ensemble, however, turns out to be more sensitive to large errors in ϕ (if appropriately gauged), and attempts at successful recreation of the reference ensemble with simple replica algorithms can fall short.
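
    The essential mechanism, restraining only the replica average so that individual copies can still fluctuate, is easy to demonstrate on a toy system. Below, R replicas evolve by Metropolis Monte Carlo in a double-well "folding" potential under a harmonic restraint on the replica-averaged observable; the potential, observable and restraint strength are illustrative stand-ins, not the Gō-like Hamiltonian of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      def V(x):                         # toy double-well "folding" potential
          return (x**2 - 1.0)**2

      def phi(x):                       # toy stand-in for a phi-value observable
          return 1.0 / (1.0 + np.exp(-4.0 * x))

      R, k_restr, phi_exp = 16, 200.0, 0.5
      beta, step = 2.0, 0.3
      x = rng.uniform(-1.5, 1.5, R)     # R replica copies of the system

      def total_E(x):
          # only the MEAN of phi is pinned to the "experimental" value, so
          # individual replicas remain free to fluctuate between basins
          return V(x).sum() + 0.5 * k_restr * (phi(x).mean() - phi_exp)**2

      E = total_E(x)
      for _ in range(20000):
          r = rng.integers(R)
          xp = x.copy()
          xp[r] += step * rng.standard_normal()
          Ep = total_E(xp)
          if Ep <= E or rng.random() < np.exp(-beta * (Ep - E)):
              x, E = xp, Ep

      print("replica-averaged phi:", phi(x).mean())       # close to phi_exp = 0.5
      print("individual replicas:", np.round(phi(x), 2))  # spread across both basins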

  16. Reproducibility of F18-FDG PET radiomic features for different cervical tumor segmentation methods, gray-level discretization, and reconstruction algorithms.

    PubMed

    Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G

    2017-11-01

    Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from 18-Fluorine-fluorodeoxyglucose (18F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect on radiomic features on PET images of eight patients of four PET 3D-reconstruction algorithms: the maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and 3D-reprojection (3DRP) analytical method. We extracted 79 features from all segmentation methods, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e., segmentation methods: MTV1-MTV2, MTV1-GBSV, MTV2-GBSV; gray-levels: 64-32, 64-128, and 64-256; reconstruction algorithms: OSEM-FOREIR, OSEM-FOREFBP, and OSEM-3DRP). We used |d̄| as a measure of radiomic feature reproducibility level, where any feature that scored |d̄| ± SD ≤ |25|% ± 35% was considered reproducible. We used Bland-Altman analysis to evaluate the mean, standard deviation (SD), and upper/lower reproducibility limits (U/LRL) for radiomic features in response to variation in each testing parameter. Furthermore, we proposed U/LRL as a method to classify the level of reproducibility: High: ±1% ≤ U/LRL ≤ ±30%; Intermediate: ±30% < U/LRL ≤ ±45%; Low: ±45% < U/LRL ≤ ±50%. We considered any feature below the low level as nonreproducible (NR). Finally, we calculated the intraclass correlation coefficient (ICC) to evaluate the reliability of radiomic feature measurements for each parameter. The segmented volumes of 65 patients (81.3%) scored Dice coefficient >0.75 for all three volumes. The results revealed a tendency toward higher radiomic feature reproducibility for the segmentation pair MTV1-GBSV than MTV2-GBSV, for the gray-level pairs 64-32 and 64-128 than 64-256, and for the reconstruction algorithm pairs OSEM-FOREIR and OSEM-FOREFBP than OSEM-3DRP. Although the choice of cervical tumor segmentation method, gray-level value, and reconstruction algorithm may affect radiomic features, some features were characterized by high reproducibility through all testing parameters. The number of radiomic features that showed insensitivity to variations in segmentation methods, gray-level discretization, and reconstruction algorithms was 10 (13%), 4 (5%), and 1 (1%), respectively. These results suggest that a careful analysis of the effects of these parameters is essential prior to any radiomics clinical application. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
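
    As a minimal illustration of the reproducibility metrics used above, the snippet below computes the mean percentage difference d-bar between paired feature measurements and Bland-Altman limits of agreement. The symmetric normalization by the pair mean is an assumption, since the abstract does not spell out the exact denominator.

      import numpy as np

      def mean_pct_diff(a, b):
          """d-bar and its SD for paired feature values (pair-mean normalization assumed)."""
          d = 100.0 * (a - b) / ((a + b) / 2.0)
          return d.mean(), d.std(ddof=1)

      def bland_altman(a, b):
          """Bias and 95% limits of agreement for paired measurements."""
          d = a - b
          m, s = d.mean(), d.std(ddof=1)
          return m, m - 1.96 * s, m + 1.96 * s

      # reproducibility rule from the study: |d-bar| <= 25% with SD <= 35%
      a = np.array([10.2, 5.1, 8.4, 3.3])
      b = np.array([9.8, 5.3, 8.9, 3.1])
      dbar, sd = mean_pct_diff(a, b)
      print(abs(dbar) <= 25 and sd <= 35)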

  17. Using flow information to support 3D vessel reconstruction from rotational angiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waechter, Irina; Bredno, Joerg; Weese, Juergen

    2008-07-15

    For the assessment of cerebrovascular diseases, it is beneficial to obtain three-dimensional (3D) morphologic and hemodynamic information about the vessel system. Rotational angiography is routinely used to image the 3D vascular geometry, and we have shown previously that rotational subtraction angiography has the potential to also give quantitative information about blood flow. Flow information can be determined when the angiographic sequence shows inflow and possibly outflow of contrast agent. However, a standard volume reconstruction assumes that the vessel tree is uniformly filled with contrast agent during the whole acquisition. If this is not the case, the reconstruction exhibits artifacts. Here, we show how flow information can be used to support the reconstruction of the 3D vessel centerline and radii in this case. Our method uses the fast marching algorithm to determine the order in which voxels are analyzed. For every voxel, the rotational time intensity curve (R-TIC) is determined from the image intensities at the projection points of the current voxel. Next, the bolus arrival time of the contrast agent at the voxel is estimated from the R-TIC. Then, a measure of the intensity and duration of the enhancement is determined, from which a speed value is calculated that steers the propagation of the fast marching algorithm. The results of the fast marching algorithm are used to determine the 3D centerline by backtracking. The 3D radius is reconstructed from 2D radius estimates on the projection images. The proposed method was tested on computer-simulated rotational angiography sequences with systematically varied x-ray acquisition, blood flow, and contrast agent injection parameters and on datasets from an experimental setup using an anthropomorphic cerebrovascular phantom. For the computer simulation, the mean absolute error of the 3D centerline and 3D radius estimation was 0.42 and 0.25 mm, respectively. For the experimental datasets, the mean absolute error of the 3D centerline was 0.45 mm. Under pulsatile and nonpulsatile conditions, flow information can be used to enable a 3D vessel reconstruction from rotational angiography with inflow and possibly outflow of contrast agent. We found that the most important parameter for the quality of the reconstruction of centerline and radii is the range through which the x-ray system rotates in the time span of the injection. Good results were obtained if this range was at least 135°. As a standard C-arm can rotate through 205°, typically one third of the acquisition can show inflow or outflow of contrast agent, which is required for the quantification of blood flow from rotational angiography.
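
    The propagation-and-backtracking core of the method can be sketched compactly. Below, a Dijkstra grid search stands in for the fast marching solver (a first-order approximation on a 4-connected grid), the speed map plays the role of the enhancement-derived speed value, and the centerline is recovered by descending the arrival-time map from an end point; radius estimation is omitted.

      import heapq
      import numpy as np

      def arrival_times(speed, seed):
          """Dijkstra stand-in for fast marching: travel time ~ 1/speed per step."""
          T = np.full(speed.shape, np.inf)
          T[seed] = 0.0
          pq = [(0.0, seed)]
          while pq:
              t, (i, j) = heapq.heappop(pq)
              if t > T[i, j]:
                  continue
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                      nt = t + 1.0 / max(speed[ni, nj], 1e-9)
                      if nt < T[ni, nj]:
                          T[ni, nj] = nt
                          heapq.heappush(pq, (nt, (ni, nj)))
          return T

      def backtrack(T, end):
          """Trace the centerline by steepest descent of the arrival-time map."""
          path, cur = [end], end
          while T[cur] > 0:
              i, j = cur
              nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= i + di < T.shape[0] and 0 <= j + dj < T.shape[1]]
              cur = min(nbrs, key=lambda p: T[p])
              path.append(cur)
          return path[::-1]

      speed = np.full((40, 40), 0.05)    # slow background
      speed[20, 5:35] = 1.0              # fast, strongly enhancing vessel
      T = arrival_times(speed, (20, 5))
      print(backtrack(T, (20, 34))[:5])  # centerline follows the vessel row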

  18. WE-G-18A-08: Axial Cone Beam DBPF Reconstruction with Three-Dimensional Weighting and Butterfly Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Wang, W; Tang, X

    2014-06-15

    Purpose: With its major benefit in dealing with data truncation for ROI reconstruction, the algorithm of differentiated backprojection followed by Hilbert filtering (DBPF) was originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we proposed the integration of the DBPF algorithm with 3-D weighting. In this work, we further propose the incorporation of Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied to the images reconstructed with horizontal and vertical Hilbert filtering, correspondingly. In addition, Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data with an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in partial-scan scenarios, though the 3-D weighting scheme may have to be dropped because insufficient projection data are available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.

  19. The development and use of a new methodology to reconstruct courses of admission and ambulatory care based on the Danish National Patient Registry.

    PubMed

    Gubbels, Sophie; Nielsen, Kenn Schultz; Sandegaard, Jakob; Mølbak, Kåre; Nielsen, Jens

    2016-11-01

    The Danish National Patient Registry (DNPR) contains clinical and administrative data on all patients treated in Danish hospitals. The data model used for reporting is based on standardized coding of contacts rather than courses of admission and ambulatory care. To reconstruct a coherent picture of courses of admission and ambulatory care, we designed an algorithm with 28 rules that manages transfers between departments, transfers between hospitals, and inconsistencies in the data, e.g., missing time stamps, overlaps and gaps. We used data from patients admitted between 1 January 2010 and 31 December 2014. After application of the DNPR algorithm, we estimated an average of 1,149,616 courses of admission per year, or 205 hospitalizations per 1000 inhabitants per year. The median length of stay decreased from 1.58 days in 2010 to 1.29 days in 2014. The number of transfers between departments within a hospital increased from 111,576 to 176,134, while the number of transfers between hospitals decreased from 68,522 to 61,203. We standardized a 28-rule algorithm to relate registrations in the DNPR to each other in a coherent way. With the algorithm, we estimated 1.15 million courses of admission per year, which probably reflects a more accurate estimate than those published previously. Courses of admission became shorter between 2010 and 2014, and outpatient contacts became longer. These figures are compatible with a cost-conscious secondary healthcare system undertaking specialized treatment within a hospital and limiting referral to advanced services at other hospitals. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
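
    The core of such an algorithm is the merging of overlapping or nearly contiguous contacts into one course of admission. A stripped-down sketch follows; the real DNPR algorithm applies 28 rules covering missing time stamps and transfers between departments and hospitals, and the 4-hour bridging threshold here is purely illustrative.

      from datetime import datetime, timedelta

      def build_courses(contacts, max_gap=timedelta(hours=4)):
          """Merge (start, end) contact intervals into courses of admission.

          Overlaps and gaps up to max_gap are bridged (treated as transfers);
          anything longer starts a new course. Illustrative, not the DNPR rules.
          """
          contacts = sorted(contacts)
          courses = []
          cur_start, cur_end = contacts[0]
          for start, end in contacts[1:]:
              if start - cur_end <= max_gap:        # overlap or short gap: same course
                  cur_end = max(cur_end, end)
              else:                                 # long gap: a new admission
                  courses.append((cur_start, cur_end))
                  cur_start, cur_end = start, end
          courses.append((cur_start, cur_end))
          return courses

      contacts = [(datetime(2014, 3, 1, 8), datetime(2014, 3, 1, 14)),   # dept. A
                  (datetime(2014, 3, 1, 13), datetime(2014, 3, 2, 10)),  # transfer
                  (datetime(2014, 3, 9, 9), datetime(2014, 3, 9, 15))]   # readmission
      print(build_courses(contacts))                # two courses of admission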

  20. Anticoagulative strategies in reconstructive surgery – clinical significance and applicability

    PubMed Central

    Jokuszies, Andreas; Herold, Christian; Niederbichler, Andreas D.; Vogt, Peter M.

    2012-01-01

    Advanced strategies in reconstructive microsurgery, and especially free tissue transfer with advanced microvascular techniques, have been routinely applied and continuously refined for more than three decades in day-to-day clinical work. Bearing in mind success rates of more than 95%, the value of these techniques in patient care and comfort (one-step reconstruction of even the most complex tissue defects) cannot be overestimated. However, anticoagulative protocols and practices are far from general acceptance and – most importantly – lack an evidence base, while the reconstructive and microsurgical methods are mostly standardized. Therefore, the aim of our work was to review the current literature and synoptically lay out the mechanisms of action of the plethora of anticoagulative substances. The pharmacologic prevention and surgical management of thromboembolic events represent an established and essential part of microsurgery. The high success rates of microvascular free tissue transfer today are due to the treatment of patients in reconstructive centers where proper patient selection, excellent microsurgical technique, tissue transfer to adequate recipient vessels, and early anastomotic revision in case of thrombosis are provided. Whether the choice of antithrombotic agents is a factor of success remains unclear. Undoubtedly, however, a lack of microsurgical experience and bad technique can never be compensated by any regimen of antithrombotic therapy. All the more, the development of consistent standards and algorithms in reconstructive microsurgery is absolutely essential to optimize clinical outcomes and increase the multicentric and international comparability of postoperative results and complications. PMID:22294976

  1. Evaluation of noise and blur effects with SIRT-FISTA-TV reconstruction algorithm: Application to fast environmental transmission electron tomography.

    PubMed

    Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita

    2018-06-01

    Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows observation of the real-time 3D evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly with applications on nanocatalysts. In order to accomplish such rapid acquisition of the required tilt series of projections, a modern 4K high-speed camera is used that can capture up to 100 images per second in a 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn would lead to poor-quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm using blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the interest of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e., gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher-quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
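
    A compact way to see how the three ingredients fit together: a SIRT-style row/column-normalized gradient step on the data term, a TV penalty, and FISTA momentum for acceleration. In the sketch below the TV term is smoothed so that its proximal step reduces to a gradient step, which keeps the code short; the authors' exact implementation will differ.

      import numpy as np

      def tv_grad(x, eps=1e-3):
          """Gradient of a smoothed isotropic TV seminorm on a 2-D image."""
          gx = np.zeros_like(x); gy = np.zeros_like(x)
          gx[:-1, :] = x[1:, :] - x[:-1, :]
          gy[:, :-1] = x[:, 1:] - x[:, :-1]
          mag = np.sqrt(gx**2 + gy**2 + eps)
          px, py = gx / mag, gy / mag
          d = np.zeros_like(x)                     # divergence of (px, py)
          d[0, :] += px[0, :]; d[1:, :] += px[1:, :] - px[:-1, :]
          d[:, 0] += py[:, 0]; d[:, 1:] += py[:, 1:] - py[:, :-1]
          return -d

      def sirt_fista_tv(A, y, shape, lam=0.05, n_iter=60):
          """SIRT-normalized gradient + smoothed TV + FISTA momentum (sketch)."""
          R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)     # inverse row sums
          C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)     # inverse column sums
          x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
          for _ in range(n_iter):
              g = C * (A.T @ (R * (A @ z - y)))          # SIRT-preconditioned gradient
              g += lam * tv_grad(z.reshape(shape)).ravel()
              x_new = np.maximum(z - g, 0.0)             # step + nonnegativity
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # FISTA momentum
              x, t = x_new, t_new
          return x.reshape(shape)

      rng = np.random.default_rng(0)
      shape = (16, 16)
      A = (rng.random((300, 256)) < 0.05).astype(float)  # toy projection matrix
      x_true = np.zeros(shape); x_true[4:12, 5:11] = 1.0
      y = A @ x_true.ravel()
      rec = sirt_fista_tv(A, y, shape)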

  2. SCENERY: a web application for (causal) network reconstruction from cytometry data.

    PubMed

    Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper; Tsamardinos, Ioannis

    2017-07-03

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, online environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justifying the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction-dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the proposed reduction method is available on GitHub.
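
    The practically important point is the final form of the embedding: compute the dirty image, then take a weighted, subsampled FFT of it. A toy sketch with a masked unitary FFT standing in for the interferometric measurement operator; the retained-mode selection and the weights, which in the paper come from the singular values of the measurement operator, are uniform placeholders here.

      import numpy as np

      rng = np.random.default_rng(3)
      N = 64
      x = np.zeros((N, N)); x[24:40, 20:44] = 1.0       # toy sky image

      vis_mask = rng.random((N, N)) < 0.3               # sampled visibilities
      def Phi(img):                                     # toy measurement operator
          return np.fft.fft2(img, norm='ortho')[vis_mask]
      def PhiH(v):                                      # its adjoint
          g = np.zeros((N, N), complex); g[vis_mask] = v
          return np.fft.ifft2(g, norm='ortho')

      noise = rng.standard_normal(vis_mask.sum()) + 1j * rng.standard_normal(vis_mask.sum())
      y = Phi(x) + 0.01 * noise

      sel = rng.random((N, N)) < 0.15                   # retained modes (placeholder)
      w = np.ones(sel.sum())                            # singular-value weights (placeholder)

      def reduce_data(y):
          dirty = PhiH(y)                               # 1) dirty image  Phi^H y
          fhat = np.fft.fft2(dirty, norm='ortho')       # 2) DFT of the dirty image
          return w * fhat[sel]                          # 3) weighted subsampling

      print(y.size, '->', reduce_data(y).size)          # reduced data dimension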

  4. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture. This event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement scheme and a joint reconstruction algorithm for these image signals are proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, which acts as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used for the evaluation of reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.

  5. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using a standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements was analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95% confidence interval limits being within the range of ±1.15 mm. A nearly 97.5% reduction in dose did not significantly affect the height and width measurements of edentulous jaws, regardless of the reconstruction algorithm used.

  6. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computed tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
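
    A sketch of the object-domain/projection-domain iteration, using scikit-image's radon/iradon pair as the convolution-method reconstructor (assumed available); the phantom, the blocked-ray mask and the fixed iteration count are illustrative, and the paper's empirical termination criterion is not reproduced.

      import numpy as np
      from skimage.transform import radon, iradon

      theta = np.linspace(0.0, 180.0, 90, endpoint=False)
      img = np.zeros((128, 128)); img[40:80, 50:90] = 1.0
      sino = radon(img, theta=theta)

      # An opaque object blocks some rays: mark those sinogram bins as missing.
      blocked = np.zeros(sino.shape, bool); blocked[60:90, 20:50] = True
      meas = sino.copy(); meas[blocked] = 0.0

      x = iradon(meas, theta=theta)            # initial convolution reconstruction
      for _ in range(10):
          x = np.clip(x, 0, None)              # object-domain constraint
          s = radon(x, theta=theta)            # back to the projection domain
          s[~blocked] = sino[~blocked]         # reinsert the measured data
          x = iradon(s, theta=theta)           # convolution-method reconstruction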

  7. Research of centroiding algorithms for extended and elongated spot of sodium laser guide star

    NASA Astrophysics Data System (ADS)

    Shao, Yayun; Zhang, Yudong; Wei, Kai

    2016-10-01

    Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. However, the spot array obtained by Shack-Hartmann wavefront sensors (WFSs) becomes extended and elongated due to the thickness and finite size of the sodium LGS, which affects the accuracy of the wavefront reconstruction algorithm. In this paper, we compared three different centroiding algorithms, the Center of Gravity (CoG), weighted CoG (WCoG) and Intensity Weighted Centroid (IWC), as well as their accuracies for various extended and elongated spots. In addition, we compared the reconstructed image data from these three algorithms with theoretical results, and found that WCoG and IWC are the best-performing algorithms for extended and elongated spots among those tested.
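
    The three estimators differ only in the weight applied before the first-moment computation. A minimal sketch (the Gaussian weight width used for WCoG is a tunable assumption):

      import numpy as np

      def cog(I):
          """Center of gravity: first moment of the intensity distribution."""
          y, x = np.mgrid[:I.shape[0], :I.shape[1]]
          s = I.sum()
          return (x * I).sum() / s, (y * I).sum() / s

      def wcog(I, sigma=3.0):
          """Weighted CoG: apply a Gaussian weight before the first moment."""
          y, x = np.mgrid[:I.shape[0], :I.shape[1]]
          cy, cx = (I.shape[0] - 1) / 2.0, (I.shape[1] - 1) / 2.0
          W = np.exp(-((x - cx)**2 + (y - cy)**2) / (2.0 * sigma**2))
          return cog(I * W)

      def iwc(I):
          """Intensity-weighted centroid: the spot intensity itself is the weight."""
          return cog(I * I)

      # elongated, LGS-like spot
      yy, xx = np.mgrid[:32, :32]
      spot = np.exp(-((xx - 16.3)**2 / 18.0 + (yy - 15.7)**2 / 3.0))
      print(cog(spot), wcog(spot), iwc(spot))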

  8. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which select either too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at a significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
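
    The selection rule, neither a single atom per iteration nor a fixed large batch, can be approximated as follows. The threshold fraction, the pruning rule and the iteration count are illustrative guesses, not the authors' exact formulation.

      import numpy as np

      def rmp(A, y, k, alpha=0.7, n_iter=10):
          """Simplified reduced-set matching pursuit sketch.

          Per iteration: add all atoms whose residual correlation is within
          a fraction alpha of the maximum, solve least squares on the
          support, then prune to the k largest coefficients.
          """
          n = A.shape[1]
          support = np.zeros(n, bool)
          x = np.zeros(n)
          for _ in range(n_iter):
              c = np.abs(A.T @ (y - A @ x))
              support |= c >= alpha * c.max()
              idx = np.flatnonzero(support)
              coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
              keep = idx[np.argsort(np.abs(coef))[-k:]]   # prune wrong picks
              support[:] = False; support[keep] = True
              x[:] = 0.0
              x[keep], *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 256)); A /= np.linalg.norm(A, axis=0)
      x0 = np.zeros(256); x0[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
      print(np.linalg.norm(rmp(A, A @ x0, k=8) - x0))     # near-zero recovery error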

  9. Using an external gating signal to estimate noise in PET with an emphasis on tracer avid tumors

    NASA Astrophysics Data System (ADS)

    Schmidtlein, C. R.; Beattie, B. J.; Bailey, D. L.; Akhurst, T. J.; Wang, W.; Gönen, M.; Kirov, A. S.; Humm, J. L.

    2010-10-01

    The purpose of this study is to establish and validate a methodology for estimating the standard deviation of voxels with large activity concentrations within a PET image using replicate imaging that is immediately available for use in the clinic. To do this, ensembles of voxels in the averaged replicate images were compared to the corresponding ensembles in images derived from summed sinograms. In addition, the replicate-imaging noise estimate was compared to a noise estimate based on an ensemble of voxels within a region. To make this comparison two phantoms were used. The first phantom was a seven-chamber phantom constructed of 1 liter plastic bottles. Each chamber of this phantom was filled with a different activity concentration relative to the lowest activity concentration, with ratios of 1:1, 1:1, 2:1, 2:1, 4:1, 8:1 and 16:1. The second phantom was a GE Well-Counter phantom. These phantoms were imaged and reconstructed on a GE DSTE PET/CT scanner with 2D and 3D reprojection filtered backprojection (FBP), and with 2D- and 3D-ordered subset expectation maximization (OSEM). A series of tests applied to the resulting images showed that the region and replicate-imaging methods for estimating standard deviation were equivalent for backprojection reconstructions. Furthermore, the noise properties of the FBP algorithms allowed scaling the replicate estimates of the standard deviation by a factor of 1/√N, where N is the number of replicate images, to obtain the standard deviation of the full-data image. This was not the case for OSEM image reconstruction. Due to the nonlinearity of the OSEM algorithm, the noise is shown to be both position and activity concentration dependent in such a way that no simple scaling factor can be used to extrapolate noise as a function of counts. The use of the Well-Counter phantom contributed to the development of a heuristic extrapolation of the noise as a function of radius in FBP. In addition, the signal-to-noise ratio for high-uptake objects was confirmed to be higher with backprojection image reconstruction methods. These techniques were applied to several patient data sets acquired in either 2D or 3D mode, with 18F (FLT and FDG). Images of the standard deviation and signal-to-noise ratios were constructed and the standard deviations of the tumors' uptake were determined. Finally, the radial noise extrapolation relationship deduced in this paper was applied to patient data.
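
    The replicate-based estimate and the 1/√N extrapolation described above amount to only a few lines; the caveat from the study is that the scaling step is valid for linear FBP reconstructions but not for OSEM.

      import numpy as np

      def replicate_noise(replicates):
          """Voxel-wise noise from N replicate images (shape (N, ...)).

          Returns the replicate standard deviation and, scaled by 1/sqrt(N),
          the noise estimate for the full-data image. Per the study above,
          the scaling holds for FBP but NOT for OSEM reconstructions.
          """
          N = replicates.shape[0]
          sd = replicates.std(axis=0, ddof=1)
          return sd, sd / np.sqrt(N)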

  10. Transformation diffusion reconstruction of three-dimensional histology volumes from two-dimensional image stacks.

    PubMed

    Casero, Ramón; Siedlecka, Urszula; Jones, Elizabeth S; Gruscheski, Lena; Gibb, Matthew; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2017-05-01

    Traditional histology is the gold standard for tissue studies, but it is intrinsically reliant on two-dimensional (2D) images. Study of volumetric tissue samples such as whole hearts produces a stack of misaligned and distorted 2D images that need to be reconstructed to recover a congruent volume with the original sample's shape. In this paper, we develop a mathematical framework called Transformation Diffusion (TD) for stack alignment refinement as a solution to the heat diffusion equation. This general framework does not require contour segmentation, is independent of the registration method used, and is trivially parallelizable. After the first stack sweep, we also replace registration operations by operations in the space of transformations, several orders of magnitude faster and less memory-consuming. Implementing TD with operations in the space of transformations produces our Transformation Diffusion Reconstruction (TDR) algorithm, applicable to general transformations that are closed under inversion and composition. In particular, we provide formulas for translation and affine transformations. We also propose an Approximated TDR (ATDR) algorithm that extends the same principles to tensor-product B-spline transformations. Using TDR and ATDR, we reconstruct a full mouse heart at pixel size 0.92 µm × 0.92 µm, cut 10 µm thick, spaced 20 µm (84G). Our algorithms employ only local information from transformations between neighboring slices, but the TD framework allows theoretical analysis of the refinement as applying a global Gaussian low-pass filter to the unknown stack misalignments. We also show that reconstruction without an external reference produces large shape artifacts in a cardiac specimen while still optimizing slice-to-slice alignment. To overcome this problem, we use a pre-cutting blockface imaging process previously developed by our group that takes advantage of Brewster's angle and a polarizer to capture the outline of only the topmost layer of wax in the block containing embedded tissue for histological sectioning. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
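
    For translation transforms, one TD refinement sweep is just an explicit heat-equation step on the per-slice parameters, operating in the space of transformations rather than on images; the step size and sweep count below are illustrative.

      import numpy as np

      def diffuse_translations(t, n_sweeps=200, alpha=0.4):
          """Heat-diffusion refinement of per-slice translations t, shape (S, 2).

          Each sweep moves a slice's transform toward the mean of its
          neighbors (explicit finite-difference heat equation with Neumann
          boundaries); iterating applies a Gaussian low-pass to the stack
          misalignments, as in the TD framework.
          """
          t = t.astype(float).copy()
          for _ in range(n_sweeps):
              pad = np.vstack([t[:1], t, t[-1:]])       # replicate end slices
              lap = pad[:-2] - 2.0 * t + pad[2:]        # discrete Laplacian
              t += alpha * lap
          return t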

  11. SU-E-I-04: Improving CT Quality for Radiation Therapy of Patients with High Body Mass Index Using Iterative Reconstruction Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noid, G; Tai, A; Li, X

    2015-06-15

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. The CT IQ for patients with a high Body Mass Index (BMI) can suffer from increased noise due to photon starvation. The purpose of this study is to investigate and quantify the IQ enhancement for high-BMI patients through the application of IR algorithms. Methods: CT raw data collected for 6 radiotherapy (RT) patients with BMI greater than or equal to 30 were retrospectively analyzed. All CT data were acquired using a CT scanner (Somatom Definition AS Open, Siemens) installed in a linac room (CT-on-rails) using standard imaging protocols. The CT data were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared and correlated with patient depth and BMI. The patient depth was defined as the largest distance from anterior to posterior along the bilateral symmetry axis. Results: IR techniques are demonstrated to preserve contrast and reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is roughly doubled by adopting the highest SAFIRE strength. A significant correlation was observed between patient depth and IR noise reduction through Pearson's correlation test (R = 0.9429/P = 0.0167). The mean patient depth was 30.4 cm and the average relative noise reduction for the strongest iterative reconstruction was 55%. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Dramatic noise reduction is evident for the high-BMI patients. The improved CT IQ enables more accurate delineation of tumors and organs at risk and more accurate dose calculations for RT planning and delivery guidance. Supported by Siemens.

  12. Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms

    NASA Astrophysics Data System (ADS)

    Samanta, A.; Todd, L. A.

    A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A comprehensive evaluation of algorithms, for the environmental application of tomography, requires the use of a battery of test concentration data before field implementation, which models reality and tests the limits of the algorithms.
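
    Of the four algorithms, MART is the least familiar outside tomography; one common relaxed form of its multiplicative update is sketched below. This generic form is an assumption, and the study's exact relaxation may differ; the multiplicative update keeps concentrations nonnegative, which suits gas mapping.

      import numpy as np

      def mart(A, y, n_iter=50, lam=1.0):
          """Relaxed MART for beam-path tomography (generic textbook form).

          A: (rays, pixels) path-length matrix; y: measured ray integrals (> 0).
          """
          m, n = A.shape
          x = np.full(n, y.mean() / max(A.sum(1).mean(), 1e-12))  # positive start
          for _ in range(n_iter):
              for i in range(m):
                  pred = A[i] @ x
                  if pred <= 0 or y[i] <= 0:
                      continue
                  x *= (y[i] / pred) ** (lam * A[i] / A[i].max())
          return x

      rng = np.random.default_rng(0)
      x_true = rng.random(100) + 0.1                    # toy concentration field
      A = (rng.random((40, 100)) < 0.2).astype(float)   # toy beam-path matrix
      y = A @ x_true
      print(np.abs(A @ mart(A, y) - y).max())           # small data residual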

  13. LBP-based penalized weighted least-squares approach to low-dose cone-beam computed tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Wang, Huafeng; Liu, Yan; Zhang, Hao; Gu, Xianfeng; Liang, Zhengrong

    2014-03-01

    Cone-beam computed tomography (CBCT) has attracted growing interest from researchers in image reconstruction. In practical applications of CBCT, the mAs level of the X-ray tube current is lowered in order to reduce the CBCT dose. The lowering of the X-ray tube current, however, results in degraded image quality. Thus, low-dose CBCT image reconstruction is in effect a noise problem. To acquire clinically acceptable image quality while keeping the X-ray tube current as low as achievable, some penalized weighted least-squares (PWLS)-based image reconstruction algorithms have been developed. One representative strategy in previous work is to model the prior information for solution regularization using an anisotropic penalty term. To enhance edge preservation and noise suppression at a finer scale, a novel algorithm combining the local binary pattern (LBP) with penalized weighted least-squares (PWLS), called the LBP-PWLS-based image reconstruction algorithm, is proposed in this work. The proposed LBP-PWLS-based algorithm adaptively encourages strong diffusion on local spot/flat regions around a voxel and less diffusion on edge/corner ones by adjusting the penalty in the cost function, after the LBP is utilized to classify the region around the voxel as spot, flat or edge. The LBP-PWLS-based reconstruction algorithm was evaluated using sinogram data acquired by a clinical CT scanner from the CatPhan® 600 phantom. Experimental results on the noise-resolution tradeoff measurement and other quantitative measurements demonstrated its feasibility and effectiveness in edge preservation and noise suppression in comparison with a previous PWLS reconstruction algorithm.

  14. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware.

    PubMed

    Kole, J S; Beekman, F J

    2006-02-21

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical CT and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward- and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor of 40 to 222 can be achieved, compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.

  15. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.

  16. Time-of-flight PET time calibration using data consistency

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2018-05-01

    This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.

  17. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood expectation maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  18. Wind velocity profile reconstruction from intensity fluctuations of a plane wave propagating in a turbulent atmosphere.

    PubMed

    Banakh, V A; Marakasov, D A

    2007-08-01

    Reconstruction of a wind profile based on the statistics of plane-wave intensity fluctuations in a turbulent atmosphere is considered. The algorithm for wind profile retrieval from the spatiotemporal spectrum of plane-wave weak intensity fluctuations is described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithm are presented. It is shown that the reconstruction algorithm allows retrieval of a wind profile from turbulent plane-wave intensity fluctuations with acceptable accuracy.

  19. Fast-SG: an alignment-free algorithm for hybrid assembly.

    PubMed

    Di Genova, Alex; Ruz, Gonzalo A; Sagot, Marie-France; Maass, Alejandro

    2018-05-01

    Long-read sequencing technologies are the ultimate solution for genome repeats, allowing near reference-level reconstructions of large genomes. However, long-read de novo assembly pipelines are computationally intense and require a considerable amount of coverage, thereby hindering their broad application to the assembly of large genomes. Alternatively, hybrid assembly methods that combine short- and long-read sequencing technologies can reduce the time and cost required to produce de novo assemblies of large genomes. Here, we propose a new method, called Fast-SG, that uses a new ultrafast alignment-free algorithm specifically designed for constructing a scaffolding graph using lightweight data structures. Fast-SG can construct the graph from either short or long reads. This allows the reuse of efficient algorithms designed for short-read data and permits the definition of novel modular hybrid assembly pipelines. Using comprehensive standard datasets and benchmarks, we show how Fast-SG outperforms the state-of-the-art short-read aligners when building the scaffolding graph and can be used to extract linking information from either raw or error-corrected long reads. We also show how a hybrid assembly approach using Fast-SG with shallow long-read coverage (5X) and moderate computational resources can produce long-range and accurate reconstructions of the genomes of Arabidopsis thaliana (Ler-0) and human (NA12878). Fast-SG opens the door to accurate hybrid long-range reconstructions of large genomes with low effort, high portability, and low cost.

  20. Gaussian process tomography for soft x-ray spectroscopy at WEST without equilibrium information

    NASA Astrophysics Data System (ADS)

    Wang, T.; Mazon, D.; Svensson, J.; Li, D.; Jardin, A.; Verdoolaege, G.

    2018-06-01

    Gaussian process tomography (GPT) is a recently developed tomography method based on Bayesian probability theory [J. Svensson, JET Internal Report EFDA-JET-PR(11)24, 2011 and Li et al., Rev. Sci. Instrum. 84, 083506 (2013)]. By modeling the soft X-ray (SXR) emissivity field in a poloidal cross section as a Gaussian process, Bayesian SXR tomography can be carried out in a robust and extremely fast way. Owing to the short execution time of the algorithm, GPT is an important candidate for providing real-time reconstructions for impurity transport studies and fast magnetohydrodynamic control. In addition, the Bayesian formalism allows quantifying the uncertainty on the inferred parameters. In this paper, the GPT technique is validated using a synthetic data set expected from the WEST tokamak, and results of its application to the reconstruction of SXR emissivity profiles measured on Tore Supra are shown. The method is compared with the standard algorithm based on minimization of the Fisher information.
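
    The closed-form nature of the inference is what makes GPT fast: for a linear chord-integral model y = R e + noise and a Gaussian process prior on the emissivity e, the posterior is Gaussian. The sketch below assumes a squared-exponential prior; the kernel and noise level are illustrative, not the WEST or Tore Supra settings.

    ```python
    # Minimal linear-Gaussian GPT sketch; kernel shape, hyperparameters
    # and noise model are assumptions for illustration.
    import numpy as np

    def gpt_reconstruct(R, y, X, length=0.1, sig=1.0, noise=0.05):
        """R: (n_chords, n_pix) geometry matrix, y: line-integrated SXR
        signals, X: (n_pix, 2) pixel centre coordinates."""
        d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
        K = sig**2 * np.exp(-0.5 * d2 / length**2)   # GP prior covariance
        S = R @ K @ R.T + noise**2 * np.eye(len(y))
        mean = K @ R.T @ np.linalg.solve(S, y)       # posterior mean
        cov = K - K @ R.T @ np.linalg.solve(S, R @ K)
        return mean, np.sqrt(np.diag(cov))           # emissivity, 1-sigma
    ```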

  1. Interleaved diffusion-weighted EPI improved by adaptive partial-Fourier and multi-band multiplexed sensitivity-encoding reconstruction

    PubMed Central

    Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei

    2014-01-01

    Purpose We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion weighted imaging (DWI). Methods First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive Homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either a) motion-induced k-space energy peak displacement, or b) susceptibility field gradient induced fast phase changes. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results The new adaptive Homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion The integration of the multi-band and adaptive Homodyne-MUSE algorithms significantly improves the spatial-resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000

  2. The effects of navigator distortion and noise level on interleaved EPI DWI reconstruction: a comparison between image- and k-space-based method.

    PubMed

    Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua

    2018-03-23

    To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels than the original IRIS algorithm. SPIRiT may be hindered by a low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion, and different reconstruction methods show variable sensitivity to navigator distortion and noise levels. Furthermore, these findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.

  3. ECG-gated interventional cardiac reconstruction for non-periodic motion.

    PubMed

    Rohkohl, Christopher; Lauritsch, Günter; Biller, Lisa; Hornegger, Joachim

    2010-01-01

    The 3-D reconstruction of cardiac vasculature using C-arm CT is an active and challenging field of research. In interventional environments, patients often have arrhythmic heart signals or cannot hold their breath during the complete data acquisition. This important group of patients cannot be reconstructed with current approaches, which depend strongly on a high degree of cardiac motion periodicity to work properly. In last year's MICCAI contribution, a first algorithm was presented that is able to estimate non-periodic 4-D motion patterns. However, that algorithm still depends on periodicity to some degree, as it requires a prior image obtained using a simple ECG-gated reconstruction. In this work we aim to solve this problem by developing a motion-compensated ECG-gating algorithm. It is built upon a 4-D time-continuous affine motion model which is capable of compactly describing highly non-periodic motion patterns. A stochastic optimization scheme is derived which minimizes the error between the measured projection data and the forward projection of the motion-compensated reconstruction. For evaluation, the algorithm is applied to 5 datasets of the left coronary arteries of patients who ignored the breath-hold command and/or had arrhythmic heart signals during the data acquisition. By applying the developed algorithm, the average visibility of the vessel segments could be increased by 27%. The results show that the proposed algorithm provides excellent reconstruction quality in cases where classical approaches fail. The algorithm is highly parallelizable and a clinically feasible runtime of under 4 minutes is achieved using modern graphics card hardware.

  4. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of PAPA theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.

  5. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    PubMed Central

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun

    2018-01-01

    Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256×13 real-time radar image display with a throughput of 28.2 frames per second. PMID:29621170
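
    For reference, the textbook OMP baseline against which the two-stage processor is compared looks as follows; this is the generic greedy algorithm, not the authors' FPGA design.

    ```python
    # Plain orthogonal matching pursuit: recover a k-sparse x from
    # y = Phi @ x by greedy atom selection and least-squares refits.
    import numpy as np

    def omp(Phi, y, k):
        residual, support = y.copy(), []
        for _ in range(k):
            corr = np.abs(Phi.T @ residual)
            corr[support] = 0.0                  # skip chosen atoms
            support.append(int(np.argmax(corr)))
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residual = y - sub @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x
    ```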

  6. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    PubMed

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time-difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to clinical EIT data from a slow-flow inflation pressure-volume manoeuvre in lung-healthy patients and adult patients with lung injury. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability of the LSRM to reconstruct sharp conductivity changes in the distribution of lung ventilation.

  7. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    NASA Astrophysics Data System (ADS)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

    4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue-based 4D reconstruction is compared to conventional (non-4D) maximum a posteriori (MAP) reconstruction, and to 4D reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment ('2C3K') model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvement in five of the eight combinations of the four kinetic parameters mapped and the two bias and noise measures used to analyse them, and produced better results in 5/8 combinations than any of the other reconstruction algorithms studied, while spectral-model-based 4D reconstruction produced the best results in 2/8. 2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
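
    The spline-residue TAC model itself is easy to state in code: each basis function is the arterial input function convolved with a cubic B-spline, and a TAC is a weighted sum of these. Knot placement, time grid and the toy input function below are assumptions for illustration.

    ```python
    # Sketch of the spline-residue TAC basis; knots, time grid and the
    # toy arterial input function (AIF) are illustrative assumptions.
    import numpy as np
    from scipy.interpolate import BSpline

    t = np.linspace(0.0, 60.0, 601)            # minutes
    dt = t[1] - t[0]
    aif = t * np.exp(-t / 2.0)                 # toy AIF
    knots = np.concatenate(([0.0] * 3, np.linspace(0, 60, 8), [60.0] * 3))
    n_basis = len(knots) - 4                   # cubic splines (k = 3)
    basis = []
    for i in range(n_basis):
        c = np.zeros(n_basis); c[i] = 1.0
        b = BSpline(knots, c, 3)(t)
        basis.append(np.convolve(aif, b)[:len(t)] * dt)  # AIF * spline
    B = np.stack(basis, axis=1)                # (n_t, n_basis)
    w = np.abs(np.random.default_rng(1).normal(size=n_basis))
    tac = B @ w                                # one voxel's model TAC
    ```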

  8. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is both accurate and time efficient. A number of spatial interpolation techniques used in the projection, based on nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc., yield comparable results. For best accuracy, reconstructing an SR image at twice the LR resolution requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
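
    The projection step can be written compactly: every LR pixel is placed at its shifted location and the HR grid is interpolated from the resulting scattered samples. The linear interpolant below is one of the options mentioned above; shifts are assumed known, i.e. already registered.

    ```python
    # Sketch of shift-and-project super-resolution onto a 2x grid;
    # shifts are assumed registered, and linear scattered-data
    # interpolation stands in for the paper's list of interpolants.
    import numpy as np
    from scipy.interpolate import griddata

    def project_to_hr(lr_images, shifts, factor=2):
        """lr_images: list of (h, w) arrays; shifts: (dy, dx) per image
        in LR pixel units."""
        h, w = lr_images[0].shape
        pts, vals = [], []
        for img, (dy, dx) in zip(lr_images, shifts):
            yy, xx = np.mgrid[:h, :w].astype(float)
            pts.append(np.column_stack([(yy + dy).ravel(),
                                        (xx + dx).ravel()]))
            vals.append(img.ravel())
        gy, gx = np.mgrid[:h * factor, :w * factor] / float(factor)
        return griddata(np.vstack(pts), np.concatenate(vals),
                        (gy, gx), method='linear')
    ```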

  9. ART 3.5D: an algorithm to label arteries and veins from three-dimensional angiography.

    PubMed

    Barra, Beatrice; De Momi, Elena; Ferrigno, Giancarlo; Pero, Guglielmo; Cardinale, Francesco; Baselli, Giuseppe

    2016-10-01

    Preoperative three-dimensional (3-D) visualization of brain vasculature by digital subtraction angiography from computerized tomography (CT) is gaining more and more importance in neurosurgery, since vessels are the primary landmarks both for organs at risk and for navigation. Surgical embolization of cerebral aneurysms and arteriovenous malformations, epilepsy surgery, and stereoelectroencephalography are a few examples. Contrast-enhanced cone-beam computed tomography (CE-CBCT) is a powerful tool, since it is capable of acquiring images in the operating room shortly before surgery. However, standard 3-D reconstructions do not provide a direct distinction between arteries and veins, which is of utmost importance and has so far been left to the surgeon's inference. Pioneering attempts with true four-dimensional (4-D) CT perfusion scans have already been described, though at the expense of longer acquisition protocols, higher dosages, and considerable resolution losses. Hence, space is open for approaches attempting to recover the contrast dynamics from standard CE-CBCT, on the basis of anomalies overlooked in the standard 3-D approach. This paper presents algebraic reconstruction technique (ART) 3.5D, a method that overcomes the clinical limitations of 4-D CT using standard 3-D CE-CBCT scans. The strategy works on the 3-D angiography, previously segmented in the standard way, and reprocesses the dynamics hidden in the raw data to recover an approximate dynamics in each segmented voxel. Next, a classification algorithm labels the angiographic voxels as artery or vein. Numerical simulations were performed on a digital phantom of a simplified 3-D vasculature with contrast transit. CE-CBCT projections were simulated and used for ART 3.5D testing. We achieved up to 90% classification accuracy in simulations, proving the feasibility of the presented approach for recovering dynamic information for artery and vein segmentation.

  10. Functional Validation and Comparison Framework for EIT Lung Imaging

    PubMed Central

    Meybohm, Patrick; Weiler, Norbert; Frerichs, Inéz; Adler, Andy

    2014-01-01

    Introduction Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited. Methods We used the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We considered the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data. Results and Conclusions Our results indicate that, while variation in the appearance of images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Among the analysed algorithms, several advanced algorithms perform well, while some others are significantly worse. Given its vintage and ad hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT. PMID:25110887

  11. Plenoptic projection fluorescence tomography.

    PubMed

    Iglesias, Ignacio; Ripoll, Jorge

    2014-09-22

    A new method to obtain the three-dimensional localization of fluorochrome distributions in micrometric samples is presented. It uses a microlens array coupled to the image port of a standard microscope to obtain tomographic data, which are reconstructed with a filtered back-projection algorithm. Scanning of the microlens array is proposed to obtain a dense data set for reconstruction. Simulation and experimental results are shown, and the implications of this approach for fast 3D imaging are discussed.

  12. Investigation of Image Reconstruction Parameters of the Mediso nanoScan PC Small-Animal PET/CT Scanner for Two Different Positron Emitters Under NEMA NU 4-2008 Standards.

    PubMed

    Gaitanis, Anastasios; Kastis, George A; Vlastou, Elena; Bouziotis, Penelope; Verginis, Panayotis; Anagnostopoulos, Constantinos D

    2017-08-01

    The Tera-Tomo 3D image reconstruction algorithm (a version of OSEM), provided with the Mediso nanoScan® PC (PET8/2) small-animal positron emission tomography (PET)/x-ray computed tomography (CT) scanner, has various parameter options such as the total level of regularization, subsets, and iterations. The acquisition time in PET also plays an important role. This study aims to assess the performance of this new small-animal PET/CT scanner for different acquisition times and reconstruction parameters, for 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) and Ga-68, under the NEMA NU 4-2008 standards. Various image quality metrics were calculated for different realizations of [18F]FDG- and Ga-68-filled image quality (IQ) phantoms. [18F]FDG imaging produced better images than Ga-68. The best compromise for the optimization of all image quality factors is achieved with at least a 30 min acquisition and image reconstruction with 52 iteration updates combined with a high regularization level. A high regularization level at 52 iteration updates and a 30 min acquisition time were found to optimize most of the figures of merit investigated.

  13. Non-null annular subaperture stitching interferometry for aspheric test

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A non-null annular subaperture stitching interferometry (NASSI) method, combining the subaperture stitching idea and the non-null test method, is proposed for steep aspheric testing. Compared with standard annular subaperture stitching interferometry (ASSI), a partial null lens (PNL) is employed as an alternative to the transmission sphere to generate different aspherical wavefronts as references. The number of subapertures needed for coverage is thus greatly reduced, since the aspherical wavefronts better match the local slope of aspheric surfaces. Instead of the various mathematical stitching algorithms, a simultaneous reverse optimizing reconstruction (SROR) method based on system modeling and ray tracing is proposed for full-aperture figure error reconstruction. All the subaperture measurements are simulated simultaneously with a multi-configuration model in a ray-tracing program, including modeling of the interferometric system and of the subaperture misalignments. With the multi-configuration model, the full-aperture figure error is extracted in the form of Zernike polynomials from the subaperture wavefront data by the SROR method. This method concurrently accomplishes subaperture retrace error and misalignment correction, requiring neither complex mathematical algorithms nor subaperture overlaps. A numerical simulation compares the performance of NASSI and standard ASSI, demonstrating the high accuracy of NASSI in testing steep aspherics. Experimental results of NASSI are shown to be in good agreement with those of a Zygo® Verifire™ Asphere interferometer.

  14. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).

    PubMed

    Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-05-16

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by verifying that the output of the process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from these left singular vectors, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power, retains the useful signal and filters out noise. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
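
    The construction of the sampling operator can be sketched directly from the description above: build a rectangular Toeplitz matrix from the transmitted replica, take its SVD, and use the leading left singular vectors as the measurement matrix. Sizes, the toy replica and the delay are assumptions; the ℓ1 recovery step is omitted.

    ```python
    # Hedged sketch of the SVD-based sampler; the PRN-like replica,
    # matrix sizes and delay are illustrative, and reconstruction via
    # an l1 solver is omitted.
    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(2)
    replica = np.sign(rng.standard_normal(256))   # toy spreading code
    TZ = toeplitz(replica, np.zeros(64))          # rectangular Toeplitz
    U, s, Vt = np.linalg.svd(TZ, full_matrices=False)
    M = 32                                        # measurements << 256
    Phi = U[:, :M].T                              # sampler from left SVs
    x = np.zeros(256); x[40] = 1.0                # sparse delay spike
    received = np.convolve(replica, x)[:256]      # delayed replica
    y = Phi @ received                            # sub-Nyquist samples
    print(y.shape)                                # (32,)
    ```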

  15. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)

    PubMed Central

    Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-01-01

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by verifying that the output of the process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from these left singular vectors, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power, retains the useful signal and filters out noise. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain. PMID:29772731

  16. Anisotropic conductivity imaging with MREIT using equipotential projection algorithm.

    PubMed

    Değirmenci, Evren; Eyüboğlu, B Murat

    2007-12-21

    Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) with surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume an isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurement. To obtain true conductivity values, a single potential or conductivity measurement is sufficient to determine the scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.

  17. Comparisons of hybrid radiosity-diffusion model and diffusion equation for bioluminescence tomography in cavity cancer detection

    NASA Astrophysics Data System (ADS)

    Chen, Xueli; Yang, Defu; Qu, Xiaochao; Hu, Hao; Liang, Jimin; Gao, Xinbo; Tian, Jie

    2012-06-01

    Bioluminescence tomography (BLT) has been successfully applied to the detection and therapeutic evaluation of solid cancers. However, the existing BLT reconstruction algorithms are not accurate enough for cavity cancer detection because they neglect the void problem. Motivated by the ability of the hybrid radiosity-diffusion model (HRDM) to describe light propagation in cavity organs, an HRDM-based BLT reconstruction algorithm was developed for the specific problem of cavity cancer detection. HRDM has been applied to optical tomography but has been limited to simple and regular geometries because of the complexity of coupling the boundary between the scattering and void regions. In the proposed algorithm, HRDM was first applied to complicated and irregular three-dimensional geometries and then employed as the forward light transport model to describe bioluminescent light propagation in tissues. Combining HRDM with a sparse reconstruction strategy, cavity cancer cells labeled with bioluminescent probes can be more accurately reconstructed. Compared with the diffusion equation based reconstruction algorithm, the necessity and superiority of the HRDM-based algorithm were demonstrated with simulation, phantom and animal studies. An in vivo gastric cancer-bearing nude mouse experiment was conducted, whose results revealed the ability and feasibility of the HRDM-based algorithm in the biomedical application of gastric cancer detection.

  18. Comparisons of hybrid radiosity-diffusion model and diffusion equation for bioluminescence tomography in cavity cancer detection.

    PubMed

    Chen, Xueli; Yang, Defu; Qu, Xiaochao; Hu, Hao; Liang, Jimin; Gao, Xinbo; Tian, Jie

    2012-06-01

    Bioluminescence tomography (BLT) has been successfully applied to the detection and therapeutic evaluation of solid cancers. However, the existing BLT reconstruction algorithms are not accurate enough for cavity cancer detection because they neglect the void problem. Motivated by the ability of the hybrid radiosity-diffusion model (HRDM) to describe light propagation in cavity organs, an HRDM-based BLT reconstruction algorithm was developed for the specific problem of cavity cancer detection. HRDM has been applied to optical tomography but has been limited to simple and regular geometries because of the complexity of coupling the boundary between the scattering and void regions. In the proposed algorithm, HRDM was first applied to complicated and irregular three-dimensional geometries and then employed as the forward light transport model to describe bioluminescent light propagation in tissues. Combining HRDM with a sparse reconstruction strategy, cavity cancer cells labeled with bioluminescent probes can be more accurately reconstructed. Compared with the diffusion equation based reconstruction algorithm, the necessity and superiority of the HRDM-based algorithm were demonstrated with simulation, phantom and animal studies. An in vivo gastric cancer-bearing nude mouse experiment was conducted, whose results revealed the ability and feasibility of the HRDM-based algorithm in the biomedical application of gastric cancer detection.

  19. Three-Dimensional Weighting in Cone Beam FBP Reconstruction and Its Transformation Over Geometries.

    PubMed

    Tang, Shaojie; Huang, Kuidong; Cheng, Yunyong; Niu, Tianye; Tang, Xiangyang

    2018-06-01

    With a substantially increased number of detector rows in multidetector CT (MDCT), the axial scan, with projection data acquired along a circular source trajectory, has become the method of choice in an increasing number of clinical applications. Recognizing the practical relevance of image reconstruction directly from projection data acquired in the native cone beam (CB) geometry, especially in scenarios wherein the highest achievable in-plane resolution is desirable, we present a three-dimensional (3-D) weighted CB-FBP algorithm in this geometry. We start the algorithm's derivation in the cone-parallel geometry. Via a change of variables, taking the Jacobian into account and making heuristic and empirical assumptions, we arrive at the formulas for 3-D weighted image reconstruction in the native CB geometry. Using projection data simulated by computer and acquired by an MDCT scanner, we evaluate and verify the performance of the proposed algorithm for image reconstruction directly from projection data acquired in the native CB geometry. The preliminary data show that the proposed algorithm performs as well as the 3-D weighted CB-FBP algorithm in the cone-parallel geometry. The proposed algorithm is anticipated to find its utility in extensive clinical and preclinical applications wherein the reconstruction of images in the native CB geometry, i.e., the geometry of data acquisition, is of relevance.

  20. Tomographic reconstruction of tracer gas concentration profiles in a room with the use of a single OP-FTIR and two iterative algorithms: ART and PWLS.

    PubMed

    Park, D Y; Fessler, J A; Yost, M G; Levine, S P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 x 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of the highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded in some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, form a system applicable to both environmental and industrial settings.
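
    In its simplest form, PWLS minimizes a weighted data mismatch plus a quadratic roughness penalty. The sketch below solves the normal equations directly for a small pixel grid; the 1-D difference penalty and the dense direct solve are simplifications, not the authors' implementation.

    ```python
    # Minimal PWLS sketch: min (y - Ax)' W (y - Ax) + beta * ||D x||^2.
    # The 1-D roughness penalty and direct solve are illustrative
    # simplifications of the paper's setup.
    import numpy as np

    def pwls(A, y, weights, beta=1.0):
        """A: (n_paths, n_pix) path-length matrix, y: path-integral
        concentrations, weights: per-measurement confidence."""
        n = A.shape[1]
        D = np.diff(np.eye(n), axis=0)     # first differences (1-D)
        W = np.diag(weights)
        H = A.T @ W @ A + beta * (D.T @ D)
        return np.linalg.solve(H, A.T @ W @ y)
    ```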

  1. Tomographic Reconstruction of Tracer Gas Concentration Profiles in a Room with the Use of a Single OP-FTIR and Two Iterative Algorithms: ART and PWLS.

    PubMed

    Park, Doo Y; Fessler, Jeffrey A; Yost, Michael G; Levine, Steven P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 × 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of the highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded in some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, form a system applicable to both environmental and industrial settings.

  2. Jini service to reconstruct tomographic data

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.

    2002-06-01

    A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for routine clinical studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.

  3. Image reconstruction from few-view CT data by gradient-domain dictionary learning.

    PubMed

    Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong

    2016-05-21

    Decreasing the number of projections is an effective way to reduce the radiation dose delivered to patients in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively, and the desired image is subsequently reconstructed from the sparse representations of both gradients by solving a least-squares problem. Since the gradient images are sparser than the image itself, the proposed approach can lead to sparser representations than conventional DL methods in the image domain, and thus a better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were carried out through computer simulations as well as real-data experiments on fan-beam and cone-beam geometries. The results show that the proposed algorithm yields better images than the existing algorithms.

  4. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and to better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on iterative thresholding combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is smaller than that obtained using the algorithm with traditional principal component analysis, and better consistency of the reconstructed color with human vision is achieved.
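
    The reconstruction can be sketched as iterative soft-thresholding of the coefficients in the principal component basis. Plain PCA stands in here for the paper's visually weighted basis; the camera model and all parameters are assumptions.

    ```python
    # ISTA in a PCA basis: solve min ||y - C P a||^2 + lam |a|_1, then
    # return P a as the reflectance. Plain PCA replaces the weighted
    # basis of the paper; all parameters are illustrative.
    import numpy as np

    def ista_pca(C, y, P, lam=0.01, n_iter=200):
        """C: (n_channels, n_bands) camera response, y: channel values,
        P: (n_bands, n_comp) principal component basis."""
        A = C @ P
        step = 1.0 / np.linalg.norm(A, 2)**2     # 1/L step size
        a = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = a - step * (A.T @ (A @ a - y))   # gradient step
            a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
        return P @ a                             # reconstructed spectrum
    ```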

  5. Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.

    PubMed

    Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A

    2007-01-01

    CT images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data-driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.

  6. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high-quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with the empirical hypothesis that the target image can be written as a weighted summation of a series of basis images, generated by raising the raw cone-beam projection data to different powers and then reconstructing with the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
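
    The weight-fitting step reduces to a small least-squares problem once the basis images are formed. In the sketch below, `fbp` is a hypothetical placeholder for any standard filtered backprojection routine; the powers and shapes are illustrative.

    ```python
    # Sketch of the prior-constrained correction: basis images are FBP
    # reconstructions of the projections raised to different powers, and
    # weights are fitted to the low-scatter prior. `fbp` is a
    # hypothetical placeholder for an FBP implementation.
    import numpy as np

    def scatter_corrected(proj, powers, prior, fbp):
        basis = [fbp(proj**p) for p in powers]
        B = np.stack([b.ravel() for b in basis], axis=1)
        w, *_ = np.linalg.lstsq(B, prior.ravel(), rcond=None)
        return (B @ w).reshape(prior.shape)   # corrected image
    ```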

  7. Inverse scattering and refraction corrected reflection for breast cancer imaging

    NASA Astrophysics Data System (ADS)

    Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John

    2010-03-01

    Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high resolution, 3D attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing with refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360 degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, in 8 minutes on average. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for the diagnosis of breast disease.

  8. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.

  9. A biomechanical modeling guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2017-03-01

    Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time and dose of 4D-CBCT are substantially longer and higher than those of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections and thereby reduce the imaging time and dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structural details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy in low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons show that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volumes in low-contrast regions, which can potentially benefit multiple clinical applications, including treatment outcome analysis.

  10. A preliminary investigation of ROI-image reconstruction with the rebinned BPF algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Xia, Dan; Yu, Lifeng; Sidky, Emil Y.; Pan, Xiaochuan

    2008-03-01

    The back-projection filtration (BPF) algorithm is capable of reconstructing ROI images from truncated data acquired with a wide class of general trajectories. However, it has been observed that, similar to other algorithms for convergent beam geometries, the BPF algorithm involves a spatially varying weighting factor in the backprojection step. This weighting factor can not only increase the computation load, but also amplify the noise in reconstructed images. The weighting factor can be eliminated by appropriately rebinning the measured cone-beam data into fan-parallel-beam data. Such an appropriate data rebinning not only removes the weighting factor, but also retains other favorable properties of the BPF algorithm. In this work, we conduct a preliminary study of the rebinned BPF algorithm and its noise properties. Specifically, we consider an application in which the detector and source can move in several directions to achieve ROI data acquisition. The combined motion of the detector and source generally forms a complex trajectory. We investigate image reconstruction within an ROI from data acquired in this kind of application.

  11. Statistical reconstruction for cosmic ray muon tomography.

    PubMed

    Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J

    2007-08-01

    Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum-likelihood/expectation-maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.

  12. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data are excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, the CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  13. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce the radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduce the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and develop an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
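
    Structurally, the alternating minimization cycles between the two subproblems named above. The skeleton below treats the whole image as a single "patch" so the two steps stay readable; it shows the structure only, not the paper's patch-based solver or its balancing-principle parameter choice.

    ```python
    # Schematic alternating minimization: an l1 sparse-coding step
    # (ISTA) and a quadratic image-update step. Whole-image "patches"
    # and fixed parameters are simplifying assumptions.
    import numpy as np

    def alt_min(A, y, D, lam=0.1, mu=1.0, outer=10, inner=50):
        x = np.zeros(D.shape[0])                 # flattened image
        for _ in range(outer):
            # sparse coding: min_a ||x - D a||^2 + lam |a|_1
            a = np.zeros(D.shape[1])
            t = 1.0 / np.linalg.norm(D, 2)**2
            for _ in range(inner):
                g = a - t * (D.T @ (D @ a - x))
                a = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)
            # image update: min_x ||A x - y||^2 + mu ||x - D a||^2
            H = A.T @ A + mu * np.eye(len(x))
            x = np.linalg.solve(H, A.T @ y + mu * (D @ a))
        return x
    ```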

  14. Characterization and optimization of image quality as a function of reconstruction algorithms and parameter settings in a Siemens Inveon small-animal PET scanner using the NEMA NU 4-2008 standards

    NASA Astrophysics Data System (ADS)

    Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.

    2011-02-01

    The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as percentage standard deviation in the uniform phantom region (%STD unif), activity recovery coefficients for the FDG-filled rods (RC rod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SOR wat and SOR air). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNR rod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D, the number of iterations was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNR rod for the small-diameter rods was obtained using MAP with uniform variance and β=0.4. This setting led to RC rod,1 mm=0.21, RC rod,2 mm=0.57, %STD unif=1.38, SOR wat=0.0011, and SOR air=0.00086. However, the highest activity recovery for the smallest rods with still very small %STD unif was obtained using β=0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). Highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.
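
    For context, the NEMA NU 4-style figures quoted above can be computed from ROI voxel arrays as in the sketch below, assuming the ROIs have already been delineated on the reconstructed image. The CNR definition is this sketch's own choice (rod contrast over background noise); as the abstract notes, NEMA NU 4 does not prescribe one.

```python
import numpy as np

def nema_nu4_metrics(uniform_roi, rod_roi, water_roi, air_roi, true_activity):
    mean_unif = uniform_roi.mean()
    pct_std_unif = 100.0 * uniform_roi.std() / mean_unif          # %STD_unif
    rc_rod = rod_roi.mean() / true_activity                       # RC_rod
    sor_wat = water_roi.mean() / mean_unif                        # SOR_wat
    sor_air = air_roi.mean() / mean_unif                          # SOR_air
    cnr_rod = (rod_roi.mean() - mean_unif) / uniform_roi.std()    # assumed CNR form
    return pct_std_unif, rc_rod, sor_wat, sor_air, cnr_rod

# Toy ROI samples standing in for voxel values extracted from a reconstruction.
rng = np.random.default_rng(1)
print(nema_nu4_metrics(rng.normal(1.0, 0.014, 1000),
                       rng.normal(0.57, 0.02, 50),
                       rng.normal(0.001, 0.0005, 200),
                       rng.normal(0.0009, 0.0005, 200),
                       true_activity=1.0))
```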

  15. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). We then develop an iterative solution for AO image restoration that addresses the joint deconvolution problem. Image restoration experiments were performed to verify the effectiveness of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) obtained with our algorithm increase by 36.92% and 27.44%, respectively, the computation time decreases by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
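
    For orientation, the sketch below shows a bare Richardson-Lucy deconvolution of the kind the RL-IBD baseline builds on, assuming the PSF is already known; a Gaussian PSF stands in for a Zernike-derived one, and the paper's adaptive-TV joint deconvolution is not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    x = np.full_like(blurred, blurred.mean())   # flat positive initial estimate
    psf_flip = psf[::-1, ::-1]                  # adjoint of convolution
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode='same')
        ratio = blurred / np.maximum(est, eps)
        x *= fftconvolve(ratio, psf_flip, mode='same')  # multiplicative RL update
    return x

yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0); psf /= psf.sum()
truth = np.zeros((64, 64)); truth[30, 30] = 1.0; truth[20, 40] = 0.5
restored = richardson_lucy(fftconvolve(truth, psf, mode='same'), psf)
print(restored.max())
```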

  16. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    NASA Astrophysics Data System (ADS)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow, owing to its high imaging contrast and the strong penetration of neutrons through metallic tube walls. A novel approach to two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, a neighborhood mutation operator is used to ensure the continuity of the reconstructed object, and the adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) reach the global optimum. Reconstructions from projection data obtained by Monte Carlo simulation indicate that the overall performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shapes of objects. In addition, a CT experiment with two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
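
    The adaptive Pc/Pm idea can be illustrated with a toy adaptive GA, sketched below. It uses a classic Srinivas-Patnaik-style schedule (below-average individuals keep the full crossover and mutation probabilities, fitter ones are perturbed less) on a binary matching task; the exact IAGA-SC update rules, the neighborhood mutation operator and the sparsity constraint are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
target = rng.integers(0, 2, 32)                    # toy binary "object" to recover
fitness = lambda g: (g == target).mean()

def adaptive_prob(f, f_avg, f_max, p_max):
    # Below-average individuals keep the full probability; fitter individuals
    # are perturbed less, the usual adaptive-GA heuristic.
    if f < f_avg or f_max == f_avg:
        return p_max
    return p_max * (f_max - f) / (f_max - f_avg)

pop = rng.integers(0, 2, (40, 32))
for gen in range(200):
    fit = np.array([fitness(g) for g in pop])
    if fit.max() == 1.0:
        break
    f_avg, f_max = fit.mean(), fit.max()
    idx = rng.choice(40, 40, p=fit / fit.sum())    # roulette-wheel selection
    nxt = []
    for a, b in zip(idx[::2], idx[1::2]):
        pa, pb = pop[a].copy(), pop[b].copy()
        pc = adaptive_prob(max(fit[a], fit[b]), f_avg, f_max, 0.9)
        if rng.random() < pc:                      # one-point crossover
            cut = rng.integers(1, 32)
            pa[cut:], pb[cut:] = pop[b][cut:], pop[a][cut:]
        for child, src in ((pa, a), (pb, b)):
            pm = adaptive_prob(fit[src], f_avg, f_max, 0.1)
            flip = rng.random(32) < pm             # bit-flip mutation
            child[flip] ^= 1
            nxt.append(child)
    pop = np.array(nxt)
print(gen, max(fitness(g) for g in pop))
```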

  17. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed achieved with a gradient total variation (GTV) term in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this, we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is computationally more efficient, converging faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that iteratively steps in the direction of the projected gradient while enforcing non-negativity of the solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, when reconstructing the chest phantom images from 50% fewer projections than the FDK algorithm. Future investigations include improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts in CBCT reconstruction.
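
    A minimal sketch of the projected-gradient scheme described above: a gradient step on a least-squares data term plus a smoothed TV penalty, followed by projection onto the non-negative orthant. A small random matrix stands in for the cone-beam system, the TV gradient uses periodic boundaries for brevity, and the paper's GTV variant is not reproduced.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed isotropic TV for a 2D image (periodic boundaries)."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def gradient_projection(A, b, shape, lam=0.05, step=1e-3, n_iter=200):
    x = np.zeros(shape)
    for _ in range(n_iter):
        resid = A @ x.ravel() - b
        g = (A.T @ resid).reshape(shape) + lam * tv_grad(x)
        x = np.maximum(x - step * g, 0.0)   # projected (non-negative) gradient step
    return x

rng = np.random.default_rng(3)
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0
A = rng.standard_normal((200, 256))         # toy stand-in for the CBCT operator
b = A @ truth.ravel()
rec = gradient_projection(A, b, (16, 16))
print(np.abs(rec - truth).mean())
```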

  18. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. The strategy is therefore very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT data of the phantom were acquired to provide structural context. The phantom inclusions were filled with a fluorochrome (Cy5.5), and optical data were acquired at 60 projections over 360 degrees. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite the camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother than those from classical MLEM without regularization. Once the floating default prior is included, this bias is significantly reduced.
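
    The multiplicative update skeleton that the regularized scheme above attaches to is plain MLEM/Richardson-Lucy, sketched below for a known system matrix; the entropic regularization and floating default prior are omitted.

```python
import numpy as np

def mlem(A, b, n_iter=50, eps=1e-12):
    x = np.ones(A.shape[1])              # constant initial condition
    sens = A.sum(axis=0)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)
        x *= (A.T @ (b / proj)) / np.maximum(sens, eps)   # RL/MLEM update
    return x

rng = np.random.default_rng(4)
A = rng.random((80, 40))                 # toy non-negative system matrix
truth = rng.random(40)
b = rng.poisson(A @ truth * 50) / 50.0   # Poisson-like measurements
print(np.corrcoef(mlem(A, b), truth)[0, 1])
```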

  19. Characterization of a CT unit for the detection of low contrast structures

    NASA Astrophysics Data System (ADS)

    Viry, Anais; Racine, Damien; Ba, Alexandre; Becce, Fabio; Bochud, François O.; Verdun, Francis R.

    2017-03-01

    Major technological advances in CT enable the acquisition of high quality images while minimizing patient exposure. The goal of this study was to objectively compare two generations of iterative reconstruction (IR) algorithms for the detection of low contrast structures. An abdominal phantom (QRM, Germany) containing 8, 6 and 5 mm diameter spheres (with a nominal contrast of 20 HU) was scanned using our standard clinical noise index settings on a GE Discovery 750 HD CT scanner. Two additional rings (2.5 and 5 cm) were also added to the phantom. Images were reconstructed using FBP, ASIR-50%, and VEO (full statistical model-based iterative reconstruction, MBIR). The reconstructed slice thickness was 2.5 mm, except for VEO reconstructions (0.625 mm). The noise power spectrum (NPS) was calculated to highlight the potential noise reduction of each IR algorithm. To assess low contrast detectability (LCD), a Channelized Hotelling Observer (CHO) with 10 DDoG channels was used, with the area under the curve (AUC) as the figure of merit. Sphere contrast was also measured. ASIR-50% provided a noise reduction by a factor of two compared with FBP, without an improvement in LCD. VEO provided an additional noise reduction at a thinner slice thickness compared with ASIR-50%, together with a major improvement in LCD, especially for the large-sized phantom and small lesions. Contrast decreased by up to 10% with increasing phantom size for FBP and ASIR-50%, and remained constant with VEO. VEO is particularly interesting for LCD when dealing with large patients and small lesions and when the detection task is difficult.
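
    A minimal Channelized Hotelling Observer sketch, assuming stacks of signal-present and signal-absent ROIs are available: simple difference-of-Gaussians kernels stand in for the paper's 10 DDoG channels, and the AUC is estimated nonparametrically from the decision variables.

```python
import numpy as np

def dog_kernel(shape, sigma):
    """Centered difference-of-Gaussians channel function, flattened."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    r2 = (yy - (shape[0] - 1) / 2) ** 2 + (xx - (shape[1] - 1) / 2) ** 2
    g = lambda s: np.exp(-r2 / (2 * s * s)) / (2 * np.pi * s * s)
    return (g(sigma) - g(2 * sigma)).ravel()

def cho_auc(present, absent, sigmas=(1, 2, 4, 8)):
    U = np.array([dog_kernel(present.shape[1:], s) for s in sigmas])  # channels
    vp = present.reshape(len(present), -1) @ U.T                      # responses
    va = absent.reshape(len(absent), -1) @ U.T
    S = 0.5 * (np.cov(vp.T) + np.cov(va.T))          # pooled channel covariance
    w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template
    tp, ta = vp @ w, va @ w
    return (tp[:, None] > ta[None, :]).mean()        # nonparametric AUC

rng = np.random.default_rng(5)
yy, xx = np.mgrid[-16:16, -16:16]
signal = 0.3 * np.exp(-(xx**2 + yy**2) / 18.0)       # low-contrast disk-like signal
absent = rng.standard_normal((200, 32, 32))
present = rng.standard_normal((200, 32, 32)) + signal
print(cho_auc(present, absent))
```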

  20. Design of tree structured matched wavelet for HRV signals of menstrual cycle.

    PubMed

    Rawal, Kirti; Saini, B S; Saini, Indu

    2016-07-01

    An algorithm is presented for designing a new class of wavelets matched to the heart rate variability (HRV) signals of the menstrual cycle. The proposed wavelets are used to find HRV variations between phases of the menstrual cycle. The method finds the signal-matching characteristics by minimising the shape feature error with the least mean square (LMS) method. The proposed filter banks are used for the decomposition of the HRV signal, and the tree structure method is used for reconstructing the original signal. In this approach, decomposed sub-bands are selected according to their energy: instead of using all sub-bands for reconstruction, only those with high energy content are used. A smaller number of sub-bands is therefore required to reconstruct the original signal, which demonstrates the effectiveness of the newly created filter coefficients. Results show that the proposed wavelets differentiate HRV variations between phases of the menstrual cycle more accurately than standard wavelets.
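
    The energy-based sub-band selection can be sketched with PyWavelets (assuming the pywt package is available), using a standard wavelet in place of the matched wavelet designed in the paper: sub-bands whose energy falls below a fraction of the total are zeroed before reconstruction.

```python
import numpy as np
import pywt

def energy_select_reconstruct(signal, wavelet='db4', level=4, keep_frac=0.05):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    thresh = keep_frac * energies.sum()
    kept = [c if e >= thresh else np.zeros_like(c)   # drop low-energy sub-bands
            for c, e in zip(coeffs, energies)]
    return pywt.waverec(kept, wavelet)

t = np.linspace(0, 10, 1024)
hrv = np.sin(2*np.pi*0.1*t) + 0.3*np.sin(2*np.pi*0.25*t)  # toy RR-interval series
rec = energy_select_reconstruct(hrv)
print(np.sqrt(np.mean((rec[:len(hrv)] - hrv) ** 2)))
```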

  1. Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.

    PubMed

    Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart

    2010-01-01

    Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, Interior-point algorithm (l(1)_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm are compared and investigated in three different imaging scenarios, brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.

  2. Quantitative Features of Liver Lesions, Lung Nodules, and Renal Stones at Multi-Detector Row CT Examinations: Dependency on Radiation Dose and Reconstruction Algorithm.

    PubMed

    Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan

    2016-04-01

    To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different from those for the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (on average, 3.33 features affected by MBIR across lesion types; P < .002 for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.

  3. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    PubMed

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothing of image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV'. To evaluate the performance of the PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.

  4. PSF reconstruction for Compton-based prompt gamma imaging

    NASA Astrophysics Data System (ADS)

    Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming

    2018-02-01

    Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.

  5. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

    Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, better quantification and, in turn, correct diagnosis and accurate interpretation by the physician. This study evaluates low pass filters applied with SPECT reconstruction algorithms; the filters are assessed by estimating the azimuth and elevation angles of the reconstructed cardiac orientation. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients, each with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation of the angles with the filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves similarly for all the datasets using all the algorithms, whereas with OSEM at a cutoff < 0.4 it fails to generate a cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study of the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
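
    For reference, one common form of the Butterworth low-pass window applied to a projection's frequency spectrum is sketched below (cutoff in cycles/pixel); vendor implementations and the exact exponent convention vary.

```python
import numpy as np

def butterworth_lowpass(projection, cutoff=0.3, order=5):
    """Apply a Butterworth window |H(f)| = 1/sqrt(1 + (f/fc)^(2n)) in frequency."""
    n = projection.shape[-1]
    f = np.abs(np.fft.fftfreq(n))                # frequencies in cycles/pixel
    window = 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft(np.fft.fft(projection, axis=-1) * window, axis=-1))

rng = np.random.default_rng(6)
proj = np.sin(np.linspace(0, 3 * np.pi, 256)) + 0.5 * rng.standard_normal(256)
print(np.std(proj), np.std(butterworth_lowpass(proj)))  # noise is reduced
```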

  6. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    PubMed

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules in each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics were superior for IMR compared with FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can therefore be applied to low- or ultralow-dose chest CT with a novel iterative reconstruction algorithm without losing measurement accuracy or reproducibility.

  7. Minimal-scan filtered backpropagation algorithms for diffraction tomography.

    PubMed

    Pan, X; Anastasio, M A

    1999-12-01

    The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.

  8. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and to clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, the seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. The algorithm also successfully localizes overlapping, clustered and highly migrated seeds in the implant.
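
    The accuracy metric described above reduces to a nearest-neighbor match between measured and forward-projected seed positions; the sketch below uses a toy orthographic projection in place of the cone-beam forward projection.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_registration_error(measured_2d, computed_3d, project):
    """Mean/max nearest-neighbor distance between measured and projected seeds."""
    projected = np.array([project(p) for p in computed_3d])
    dist, _ = cKDTree(measured_2d).query(projected)   # nearest measured seed
    return dist.mean(), dist.max()

rng = np.random.default_rng(7)
seeds = rng.uniform(0, 50, (72, 3))                     # 72-seed phantom, mm
measured = seeds[:, :2] + rng.normal(0, 0.3, (72, 2))   # noisy detector centroids
mean_err, max_err = nn_registration_error(measured, seeds, lambda p: p[:2])
print(f"mean {mean_err:.2f} mm, max {max_err:.2f} mm")
```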

  9. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate ~1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  10. CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining

    NASA Astrophysics Data System (ADS)

    Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei

    2015-10-01

    The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm improves the parallel performance of BPF through selective backprojection. The general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating reconstruction, and much work has aimed at optimizing cone-beam back-projection. As the cone-beam back-projection becomes faster, data transport accounts for a much larger share of the total reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transport among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap CPU and GPU execution, (2) an innovative strategy is applied to obtain the DBP image while effectively hiding the transport time, and (3) the two streams for data transport and computation are synchronized with cudaEvent calls in the GPU implementation of the inverse finite Hilbert transform. Our main contribution is a reconstruction scheme for the S-BPF algorithm in which the GPU computes continuously and data transport adds no extra time: a 512³ volume is reconstructed in less than 0.7 s on a single Tesla-based K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half that of the same reconstruction without overlapping.

  11. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restart strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.

  12. Digital Audio Signal Processing and Nde: AN Unlikely but Valuable Partnership

    NASA Astrophysics Data System (ADS)

    Gaydecki, Patrick

    2008-02-01

    In the Digital Signal Processing (DSP) group, within the School of Electrical and Electronic Engineering at The University of Manchester, research is conducted into two seemingly distinct and disparate subjects: instrumentation for nondestructive evaluation, and DSP systems & algorithms for digital audio. We have often found that many of the hardware systems and algorithms employed to recover, extract or enhance audio signals may also be applied to signals provided by ultrasonic or magnetic NDE instruments. Furthermore, modern DSP hardware is so fast (typically performing hundreds of millions of operations per second), that much of the processing and signal reconstruction may be performed in real time. Here, we describe some of the hardware systems we have developed, together with algorithms that can be implemented both in real time and offline. A next generation system has now been designed, which incorporates a processor operating at 0.55 Giga MMACS, six input and eight output analogue channels, digital input/output in the form of S/PDIF, a JTAG and a USB interface. The software allows the user, with no knowledge of filter theory or programming, to design and run standard or arbitrary FIR, IIR and adaptive filters. Using audio as a vehicle, we can demonstrate the remarkable properties of modern reconstruction algorithms when used in conjunction with such hardware; applications in NDE include signal enhancement and recovery in acoustic, ultrasonic, magnetic and eddy current modalities.

  13. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions of the atmosphere. Many of these systems depend on a sufficiently accurate reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load of atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM). In this paper we present and compare three novel iterative reconstruction methods. The first is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to tackle the problem of atmospheric reconstruction efficiently and accurately. The method is extremely fast, highly flexible and yields superior quality. The second is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and a gradient-based method have been developed. We present a detailed comparison of our reconstructors, in terms of both quality and speed, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT, simulated on OCTOPUS, the ESO end-to-end simulation tool.
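
    For reference, a minimal cyclic Kaczmarz iteration for Ax = b is sketched below; each step projects the current estimate onto the hyperplane defined by one row. A small dense system stands in for the tomography operator.

```python
import numpy as np

def kaczmarz(A, b, sweeps=50):
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)      # squared row norms, precomputed
    for _ in range(sweeps):
        for i in range(A.shape[0]):              # cyclic sweep over equations
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((120, 60))
truth = rng.standard_normal(60)
print(np.linalg.norm(kaczmarz(A, A @ truth) - truth))
```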

  14. The properties of SIRT, TVM, and DART for 3D imaging of tubular domains in nanocomposite thin-films and sections.

    PubMed

    Chen, Delei; Goris, Bart; Bleichrodt, Folkert; Mezerji, Hamed Heidari; Bals, Sara; Batenburg, Kees Joost; de With, Gijsbertus; Friedrich, Heiner

    2014-12-01

    In electron tomography, the fidelity of the 3D reconstruction strongly depends on the reconstruction algorithm employed. In this paper, the properties of SIRT, TVM and DART reconstructions are studied with respect to having only a limited number of electrons available for imaging and to different angular sampling schemes. A well-defined realistic model is generated, consisting of tubular domains within a matrix having slab geometry. The electron tomography workflow is then simulated, from calculated tilt series through experimental effects to reconstruction. In comparison with the model, the fidelity of each reconstruction method is evaluated qualitatively and quantitatively based on global and local edge profiles and the resolvable distance between particles. Results show that the performance of all reconstruction methods declines as the total electron dose is reduced. Overall, the SIRT algorithm is the most stable method and is insensitive to changes in angular sampling. The TVM algorithm yields significantly sharper edges in the reconstruction, but the edge positions are strongly influenced by the tilt scheme and the tubular objects become thinned. The DART algorithm markedly suppresses the elongation artifacts along the beam direction and moreover segments the reconstruction, which can be considered a significant advantage for quantification. Finally, no advantage of TVM or DART in dealing with fewer projections was observed.

  15. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source

    PubMed Central

    Atwood, Robert C.; Bodey, Andrew J.; Price, Stephen W. T.; Basham, Mark; Drakopoulos, Michael

    2015-01-01

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an ‘orthogonal’ fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and ‘facility-independent’: it can run on standard cluster infrastructure at any institution. PMID:25939626

  16. Image reconstruction and system modeling techniques for virtual-pinhole PET insert systems

    PubMed Central

    Keesing, Daniel B; Mathews, Aswin; Komarov, Sergey; Wu, Heyu; Song, Tae Yong; O'Sullivan, Joseph A; Tai, Yuan-Chuan

    2012-01-01

    Virtual-pinhole PET (VP-PET) imaging is a new technology in which one or more high-resolution detector modules are integrated into a conventional PET scanner with lower-resolution detectors. It can locally enhance the spatial resolution and contrast recovery near the add-on detectors, and depending on the configuration, may also increase the sensitivity of the system. This novel scanner geometry makes the reconstruction problem more challenging compared to the reconstruction of data from a standalone PET scanner, as new techniques are needed to model and account for the non-standard acquisition. In this paper, we present a general framework for fully 3D modeling of an arbitrary VP-PET insert system. The model components are incorporated into a statistical reconstruction algorithm to estimate an image from the multi-resolution data. For validation, we apply the proposed model and reconstruction approach to one of our custom-built VP-PET systems – a half-ring insert device integrated into a clinical PET/CT scanner. Details regarding the most important implementation issues are provided. We show that the proposed data model is consistent with the measured data, and that our approach can lead to reconstructions with improved spatial resolution and lesion detectability. PMID:22490983

  17. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  18. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  19. Characterizing isolated attosecond pulses with angular streaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.

    Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  20. Characterizing isolated attosecond pulses with angular streaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.

    We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  1. Characterizing isolated attosecond pulses with angular streaking

    DOE PAGES

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...

    2018-02-12

    Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  2. Characterizing isolated attosecond pulses with angular streaking

    DOE PAGES

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...

    2018-02-13

    We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  3. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method that combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometry modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms involved in walking through a virtual space, and then design and implement a high-performance multi-threaded walk-through algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  4. Accelerating Advanced MRI Reconstructions on GPUs

    PubMed Central

    Stone, S.S.; Haldar, J.P.; Tsao, S.C.; Hwu, W.-m.W.; Sutton, B.P.; Liang, Z.-P.

    2008-01-01

    Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA’s Quadro FX 5600. The reconstruction of a 3D image with 128³ voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%. PMID:21796230

  5. Accelerating Advanced MRI Reconstructions on GPUs.

    PubMed

    Stone, S S; Haldar, J P; Tsao, S C; Hwu, W-M W; Sutton, B P; Liang, Z-P

    2008-10-01

    Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA's Quadro FX 5600. The reconstruction of a 3D image with 128³ voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%.

  6. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it with that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal to noise ratio, reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding.
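
    The structure of such a regularized iterative reconstruction can be sketched as conjugate gradient on the Tikhonov normal equations (A^H A + λI)x = A^H y. Below, a Cartesian FFT undersampling operator stands in for the NUFFT of PROPELLER blades; the regularization parameter discussed above corresponds to lam, and swapping in a true NUFFT leaves the structure unchanged.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 64
mask = rng.random((n, n)) < 0.4                      # toy k-space sampling pattern

A  = lambda x: mask * np.fft.fft2(x, norm='ortho')   # forward model (stand-in)
AH = lambda y: np.fft.ifft2(mask * y, norm='ortho')  # its adjoint

def cg_recon(y, lam=0.01, n_iter=30):
    """Conjugate gradient on the Hermitian system (A^H A + lam I) x = A^H y."""
    normal = lambda x: AH(A(x)) + lam * x
    x = np.zeros((n, n), dtype=complex)
    r = AH(y) - normal(x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = normal(p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

truth = np.zeros((n, n)); truth[20:44, 20:44] = 1.0
rec = cg_recon(A(truth))
print(np.abs(rec - truth).mean())
```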

  7. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR, reduced artifacts, for similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  8. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured with MR or CT), using anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom, creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at the low noise level in the gray matter (GM) and white matter (WM) regions in terms of the noise versus bias tradeoff. When noise increased to the medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform, with smaller isolated structures, compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in regional mean values comparable to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in simulations at various noise levels and in patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
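
    The joint-entropy ingredient named above can be sketched directly from a joint intensity histogram; the paper's wavelet multi-resolution version applies the same computation per sub-band, which is not reproduced here.

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    """JE of an image pair estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty histogram cells
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(10)
mr = rng.random((64, 64))
pet_aligned = 2.0 * mr + 0.05 * rng.standard_normal((64, 64))
pet_random = rng.random((64, 64))
# A structurally consistent pair yields lower joint entropy than a random pair.
print(joint_entropy(pet_aligned, mr), joint_entropy(pet_random, mr))
```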

  9. Blooming Artifact Reduction in Coronary Artery Calcification by A New De-blooming Algorithm: Initial Study.

    PubMed

    Li, Ping; Xu, Lei; Yang, Lin; Wang, Rui; Hsieh, Jiang; Sun, Zhonghua; Fan, Zhanming; Leipsic, Jonathon A

    2018-05-02

    The aim of this study was to investigate the use of a de-blooming algorithm in coronary CT angiography (CCTA) for optimal evaluation of calcified plaques. Calcified plaques were simulated on a coronary vessel phantom and a cardiac motion phantom. Two convolution kernels, standard (STND) and high-definition standard (HD STND), were used for image reconstruction, and a dedicated de-blooming algorithm was used for image processing. We found a smaller bias in the measurement of stenosis using the de-blooming algorithm (STND: bias 24.6% vs 15.0%, range 10.2% to 39.0% vs 4.0% to 25.9%; HD STND: bias 17.9% vs 11.0%, range 8.9% to 30.6% vs 0.5% to 21.5%). With the de-blooming algorithm, specificity for diagnosing significant stenosis increased from 45.8% to 75.0% (STND) and from 62.5% to 83.3% (HD STND), while the positive predictive value (PPV) increased from 69.8% to 83.3% (STND) and from 76.9% to 88.2% (HD STND). In the patient group, the reduction in calcification volume was 48.1 ± 10.3% and the reduction in coronary diameter stenosis over calcified plaques was 52.4 ± 24.2%. Our results suggest that the novel de-blooming algorithm can effectively decrease the blooming artifacts caused by coronary calcified plaques, and consequently improve the diagnostic accuracy of CCTA in assessing coronary stenosis.

  10. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.

  11. Extending Three-Dimensional Weighted Cone Beam Filtered Backprojection (CB-FBP) Algorithm for Image Reconstruction in Volumetric CT at Low Helical Pitches

    PubMed Central

    Hsieh, Jiang; Nilsen, Roy A.; McOlash, Scott M.

    2006-01-01

    A three-dimensional (3D) weighted helical cone beam filtered backprojection (CB-FBP) algorithm (namely, the original 3D weighted helical CB-FBP algorithm) has already been proposed to reconstruct images from projection data acquired along a helical trajectory over angular ranges up to [0, 2π]. However, an overscan is usually employed in the clinic to reconstruct tomographic images with superior noise characteristics at the most challenging anatomic structures, such as the head and spine, extremity imaging, and CT angiography as well. To obtain the best achievable noise characteristics or dose efficiency in a helical overscan, we extended the 3D weighted helical CB-FBP algorithm to handle helical pitches smaller than 1:1 (namely, the extended 3D weighted helical CB-FBP algorithm). By decomposing a helical overscan with an angular range of [0, 2π + Δβ] into a union of full scans, each corresponding to an angular range of [0, 2π], the extended 3D weighting function is a summation of the 3D weighting functions corresponding to each full scan. An experimental evaluation shows that the extended 3D weighted helical CB-FBP algorithm can improve the noise characteristics or dose efficiency of the 3D weighted helical CB-FBP algorithm at helical pitches smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is believed that such an efficient CB reconstruction algorithm, providing superior noise characteristics or dose efficiency at low helical pitches, may find extensive application in CT medical imaging. PMID:23165031

  12. Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS

    NASA Astrophysics Data System (ADS)

    Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.

    Extremely Large Telescopes are very challenging with respect to their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
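
    The reference MVM reconstructor is conceptually a single precomputed matrix applied to the measured slopes at every loop iteration, which is where the O(N²) per-loop cost comes from. A schematic sketch, with a random stand-in for the calibrated interaction matrix and deliberately reduced dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_slopes, n_act = 2400, 1350     # reduced stand-ins (the SCAO study above
                                 # uses 5402 actuators on a 42 m aperture)
D = rng.standard_normal((n_slopes, n_act))  # calibrated interaction matrix

# Offline: build the reconstructor once, e.g. as a pseudo-inverse of D.
R = np.linalg.pinv(D)            # shape (n_act, n_slopes)

# Online: each loop iteration is one matrix-vector multiplication costing
# O(n_act * n_slopes) operations -- the quadratic scaling quoted above.
slopes = rng.standard_normal(n_slopes)      # wavefront sensor measurements
gain = 0.5                                  # integrator gain (illustrative)
commands = np.zeros(n_act)
commands += gain * (R @ slopes)             # closed-loop integrator update
```

    The fast FrIM and FTR methods exist precisely to replace that dense matrix-vector product with operations that scale close to linearly in the number of degrees of freedom.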

  13. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects on image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1-, 5-, and 10-yr-old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence with the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated lower-frequency noise, lower noise variance, and coarser graininess at progressively higher percentages of ASiR™ reconstruction; in spite of the similar magnitudes of noise, images reconstructed with 50% or more ASiR™ presented a more smoothed appearance than the pre-ASiR™ 100% FBP image. Finally, relative to non-ASiR™ images at 100% of standard dose across the pediatric phantom age spectrum, similar noise levels were obtained in the images at a dose reduction of 48% with 40% ASiR™ and a dose reduction of 82% with 100% ASiR™. Conclusions: The authors' work was conducted to identify the dose reduction limits of ASiR™ for a pediatric oncology population using automatic tube current modulation. Improvements in noise levels from ASiR™ reconstruction were adapted to provide lower radiation exposure (i.e., lower mA) instead of improved image quality. We have demonstrated that, for the image quality standards required at our institution, a maximum dose reduction of 82% can be achieved using 100% ASiR™; however, to negate changes in the appearance of reconstructed images using ASiR™ with a medium-to-low-frequency noise-preserving reconstruction filter (i.e., standard), 40% ASiR™ was implemented in our clinic for 42%-48% dose reduction at all pediatric ages without a visually perceptible change in image quality or image noise.

  14. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and thereby accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm, which solves the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between high-contrast regions within the imaged object and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
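
    The nesting idea can be sketched in a few lines: one tomographic EM update of a blurred-image estimate, interleaved with several cheap image-space Richardson-Lucy deconvolution iterations. This is a simplified reading with a Gaussian image-space PSF and a dense system matrix, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nested_em(y, A, shape, sigma=1.0, n_outer=20, n_inner=10, eps=1e-12):
    """Outer tomographic EM step on a blurred image estimate, nested with
    several inexpensive image-space Richardson-Lucy deconvolution steps."""
    blur = lambda img: gaussian_filter(img, sigma)  # image-space PSF model
    sens = (A.T @ np.ones(A.shape[0])).reshape(shape)
    x = np.ones(shape)                              # sharp image estimate
    for _ in range(n_outer):
        h = blur(x)                                 # blurred estimate
        ratio = y / np.maximum(A @ h.ravel(), eps)  # tomographic EM step
        h = h * (A.T @ ratio).reshape(shape) / np.maximum(sens, eps)
        for _ in range(n_inner):                    # nested image-based EM
            x = x * blur(h / np.maximum(blur(x), eps))
    return x
```

    Because the inner loop touches only the image domain, increasing n_inner costs far less than adding outer tomographic iterations, which is the acceleration the record describes.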

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naseri, M; Rajabi, H; Wang, J

    Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter; therefore, the effect of motion is more noticeable than in a conventional SPECT scanner. Gated imaging aims to reduce motion artifacts. A major issue in gating is the lack of statistics: individual reconstructed frames are noisy, and the increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance the image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable of making full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the 4D-OSEM algorithm's potential, the tumor-to-liver activity ratio was compared with other image reconstruction methods, including 3D-MPHS and post-reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS image, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error was reduced to 16% and 10% for the post-reconstruction registration method and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined to update the image.
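
    A skeletal version of a motion-incorporated EM update of this kind is shown below: every gate's projections contribute to a single reference frame by warping the estimate into each gate and warping the backprojections back. The bilinear warp, the use of the negated DVF as an approximate inverse, and the dense per-gate system matrices are simplifying assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, dvf):
    """Deform a 2D image with a deformation vector field (backward warp)."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    coords = np.array([yy + dvf[0], xx + dvf[1]])
    return map_coordinates(image, coords, order=1, mode='nearest')

def motion_incorporated_em(projs, systems, dvfs, shape, n_iter=10, eps=1e-12):
    """Reconstruct ONE reference frame using projections from ALL gates.

    projs   -- list of projection arrays, one per respiratory gate
    systems -- list of system matrices, one per gate
    dvfs    -- DVFs mapping the reference frame to each gate (Demons-like)
    """
    x = np.ones(shape)
    for _ in range(n_iter):
        num = np.zeros(shape)
        den = np.zeros(shape)
        for y, A, dvf in zip(projs, systems, dvfs):
            xg = warp(x, dvf)                        # reference -> gate g
            r = y / np.maximum(A @ xg.ravel(), eps)  # data/model ratio
            num += warp((A.T @ r).reshape(shape), -dvf)   # gate g -> ref
            den += warp((A.T @ np.ones(len(y))).reshape(shape), -dvf)
        x *= num / np.maximum(den, eps)
    return x
```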

  16. A smoothed two- and three-dimensional interface reconstruction method

    DOE PAGES

    Mosso, Stewart; Garasi, Christopher; Drake, Richard

    2008-04-22

    The Patterned Interface Reconstruction algorithm reduces the discontinuity between material interfaces in neighboring computational elements. This smoothing improves the accuracy of the reconstruction for smooth bodies. The method can be used in two- and three-dimensional Cartesian and unstructured meshes. Planar interfaces will be returned for planar volume fraction distributions. Finally, the algorithm is second-order accurate for smooth volume fraction distributions.

  17. BPF-type region-of-interest reconstruction for parallel translational computed tomography.

    PubMed

    Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin

    2017-01-01

    The objective of this study is to present and test a new ultra-low-cost linear scan based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions, and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture is named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT. However, the ROI images reconstructed from truncated projections have severe truncation artefacts. In order to overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.

  18. Measuring the performance of super-resolution reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet

    2012-06-01

    For many military operations situational awareness is of great importance. This situational awareness and related tasks such as Target Acquisition can be acquired using cameras, of which the resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First, frequency-based evaluation techniques alone will not provide a correct answer, because they are unable to discriminate between structure-related and noise-related effects. Second, most super-resolution packages perform additional image enhancement such as noise reduction and edge enhancement; as these algorithms improve the results, they cannot be evaluated separately. Third, a single high-resolution ground truth is rarely available, so evaluating the differences between the estimated high-resolution image and its ground truth is not straightforward. Fourth, different artifacts can occur due to super-resolution reconstruction which are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Others can be applied to both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.

  19. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
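
    The two ingredients of the algorithm, a 2D discrete wavelet transform followed by progressive bit-plane coding of the transformed data, can be illustrated schematically. The toy below uses PyWavelets and emits raw bit planes rather than the actual CCSDS-coded bitstream; the wavelet choice and quantization are illustrative assumptions:

```python
import numpy as np
import pywt

img = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)

# Step 1: three-level 2D discrete wavelet transform of the image.
coeffs = pywt.wavedec2(img, 'bior4.4', level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Step 2: progressive bit-plane traversal of the quantized coefficients,
# most significant plane first. Truncating the stream of planes trades
# fidelity for data volume; keeping all planes is lossless in the
# quantized domain.
q = np.round(arr).astype(np.int64)
sign, mag = np.sign(q), np.abs(q)
n_planes = int(mag.max()).bit_length()
planes = [(mag >> p) & 1 for p in range(n_planes - 1, -1, -1)]

# Decode from the first k planes only (progressive, lossy reconstruction).
k = 4
partial = np.zeros_like(mag)
for i, plane in enumerate(planes[:k]):
    partial |= plane << (n_planes - 1 - i)
rec = pywt.waverec2(pywt.array_to_coeffs(sign * partial, slices,
                                         output_format='wavedec2'), 'bior4.4')
```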

  20. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and we convert the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
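
    In matrix form, the method amounts to replacing the system matrix A with AK, where K is a kernel matrix built from anatomical feature vectors, and reconstructing the kernel coefficients. A small sketch on assumed data (the Gaussian kernel, the nearest-neighbor truncation, and the least-squares solve are illustrative stand-ins for the paper's choices):

```python
import numpy as np

def kernel_matrix(feats, sigma=1.0, k=8):
    """Gaussian kernel matrix built from anatomical feature vectors.

    feats -- (n_nodes, n_features) anatomical features, e.g. local
             micro-CT intensities around each finite element node
    """
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Keep only each node's k nearest anatomical neighbors (row-wise).
    far = np.argsort(d2, axis=1)[:, k:]
    np.put_along_axis(K, far, 0.0, axis=1)
    return K

# Forward model y = A x with x = K a: reconstruct kernel coefficients a,
# then recover the fluorophore concentration as x = K a.
rng = np.random.default_rng(3)
n_meas, n_nodes = 30, 100
A = rng.random((n_meas, n_nodes))      # sensitivity matrix (illustrative)
feats = rng.random((n_nodes, 3))       # anatomical features (illustrative)
K = kernel_matrix(feats)
AK = A @ K                             # new system matrix
y = rng.random(n_meas)
a, *_ = np.linalg.lstsq(AK, y, rcond=None)  # stand-in for iterative solver
x = K @ a                              # anatomy-guided FMT image
```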

  1. Spectral reconstruction of signals from periodic nonuniform subsampling based on a Nyquist folding scheme

    NASA Astrophysics Data System (ADS)

    Jiang, Kaili; Zhu, Jun; Tang, Bin

    2017-12-01

    Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low complexity, and broadband spectrum sensing architecture. In this paper, we first show that the radio frequency (RF) sample clock function of the NYFR is periodic nonuniform. Then, the classical results of periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the time-series decomposed model to the subsampling case by using the spectrum characteristics of the NYFR; the subsampling case is common in broadband spectrum surveillance. Finally, we take a wideband LFM signal as an example to verify the proposed algorithm and compare the spectral reconstruction algorithm with the orthogonal matching pursuit (OMP) algorithm.

  2. Very low-dose adult whole-body tumor imaging with F-18 FDG PET/CT

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Naveed, Muhammad; McGrath, Mary; Lisi, Michele; Lavalley, Cathy; Feiglin, David

    2015-03-01

    The aim of this study was to evaluate whether the effective radiation dose due to the PET component in adult whole-body tumor imaging with time-of-flight F-18 FDG PET/CT could be significantly reduced. We retrospectively analyzed data for 10 patients with body mass index ranging from 25 to 50. We simulated F-18 FDG dose reduction to 25% of the ACR recommended dose by reconstructing simulated shorter acquisition-time-per-bed-position scans from the acquired list-mode data. F-18 FDG whole-body scans were reconstructed using a time-of-flight OSEM algorithm with advanced system modeling. Two groups of images were obtained: group A with a standard dose of F-18 FDG and standard reconstruction parameters, and group B with a simulated 25% dose and modified reconstruction parameters. Three nuclear medicine physicians blinded to the simulated activity independently reviewed the images and compared their diagnostic quality. Based on the input from the physicians, we selected optimal modified reconstruction parameters for group B. In the images so obtained, all the lesions observed in group A were visible in group B. Tumor SUV values differed between groups A and B; however, no significant differences were reported in the final interpretation of the images from the two groups. In conclusion, for a small number of patients, we have demonstrated that F-18 FDG dose reduction to 25% of the ACR recommended dose, accompanied by appropriate modification of the reconstruction parameters, provided adequate diagnostic quality of PET images acquired on time-of-flight PET/CT.

  3. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    NASA Astrophysics Data System (ADS)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task: we use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization, using the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested using a dataset which contains 9 kernels extracted by the Adobe corporation from real photographs where the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction, and how the number of projections influences the magnitude of the reconstruction error.
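
    One standard way to combine an algebraic formulation, regularization, and a conjugate gradient solver is to run CG on the regularized normal equations; the sketch below follows that pattern, with a finite-difference roughness penalty as an assumed stand-in for the paper's regularizer:

```python
import numpy as np

def cg_regularized(A, b, L, lam=0.1, n_iter=50, tol=1e-8):
    """Conjugate gradient on the regularized normal equations
    (A^T A + lam * L^T L) x = A^T b, a common way to add regularization
    to an algebraic reconstruction problem."""
    apply_M = lambda v: A.T @ (A @ v) + lam * (L.T @ (L @ v))
    x = np.zeros(A.shape[1])
    r = A.T @ b - apply_M(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Mp = apply_M(p)
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy PSF-from-projections problem; L penalizes rough solutions.
rng = np.random.default_rng(4)
A = rng.random((60, 25))      # projection operator (illustrative)
b = rng.random(60)            # measured projections (illustrative)
L = np.eye(25) - np.eye(25, k=1)   # first-difference operator
psf = cg_regularized(A, b, L)
```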

  4. Improving Cancer Detection and Dose Efficiency in Dedicated Breast Cancer CT

    DTIC Science & Technology

    2010-02-01

    source trajectory and data truncation, which can however be solved with the back-projection filtration (BPF) algorithm [6,7]. I have used the BPF ... high to low radiation dose levels. I have investigated noise properties in images reconstructed by use of FDK and BPF algorithms at different noise ... analytic algorithms such as the FDK and BPF algorithms are applied to sparse-view data, the reconstruction images will contain artifacts such as streak

  5. Optimization of SPECT-CT Hybrid Imaging Using Iterative Image Reconstruction for Low-Dose CT: A Phantom Study

    PubMed Central

    Grosser, Oliver S.; Kupitz, Dennis; Ruf, Juri; Czuczwara, Damian; Steffen, Ingo G.; Furth, Christian; Thormann, Markus; Loewenthal, David; Ricke, Jens; Amthauer, Holger

    2015-01-01

    Background: Hybrid imaging combines nuclear medicine imaging such as single photon emission computed tomography (SPECT) or positron emission tomography (PET) with computed tomography (CT). Through this hybrid design, scanned patients accumulate radiation exposure from both applications. Imaging modalities have been the subject of long-term optimization efforts, focusing on diagnostic applications. It was the aim of this study to investigate the influence of an iterative CT image reconstruction algorithm (ASIR) on the image quality of low-dose CT images. Methodology/Principal Findings: Examinations were performed with a SPECT-CT scanner with standardized CT and SPECT phantom geometries and CT protocols with systematically reduced X-ray tube currents. Analyses included image quality with respect to photon flux. Results were compared to the standard FBP reconstructed images. The general impact of the CT-based attenuation maps used during SPECT reconstruction was examined for two SPECT phantoms. Using ASIR for image reconstruction, image noise was reduced compared to FBP reconstruction at the same X-ray tube current. The Hounsfield unit (HU) values reconstructed by ASIR were correlated to the FBP HU values (R² ≥ 0.88), and the contrast-to-noise ratio (CNR) was improved by ASIR. However, for a phantom with increased attenuation, the HU values shifted for low X-ray tube currents I ≤ 60 mA (p ≤ 0.04). In addition, a shift of the HU values was observed within the attenuation-corrected SPECT images for very low X-ray tube currents (I ≤ 20 mA, p ≤ 0.001). Conclusion/Significance: In general, the decrease in X-ray tube current down to 30 mA in combination with ASIR led to a reduction of CT-related radiation exposure without a significant decrease in image quality. PMID:26390216

  6. Data reconstruction can improve abundance index estimation: An example using Taiwanese longline data for Pacific bluefin tuna

    PubMed Central

    Fukuda, Hiromu; Maunder, Mark N.

    2017-01-01

    Catch-per-unit-effort (CPUE) is often the main piece of information used in fisheries stock assessment; however, the catch and effort data that are traditionally compiled from commercial logbooks can be incomplete or unreliable for many reasons. Pacific bluefin tuna (PBF) is a seasonal target species in the Taiwanese longline fishery. Since 2010, detailed catch information for each PBF has been made available through a catch documentation scheme; previously, only market landing data with low logbook coverage were available. Therefore, several nontraditional procedures were performed to reconstruct catch and effort data for 2001-2015 from many alternative data sources not directly obtained from fishers: (1) estimating the catch number from the landing weight for 2001-2003, for which the catch number information was incomplete, based on Monte Carlo simulation; (2) deriving fishing days for 2007-2009 from voyage data recorder data, based on a newly developed algorithm; and (3) deriving fishing days for 2001-2006 from vessel trip information, based on linear relationships between fishing and at-sea days. Subsequently, generalized linear mixed models were developed with the delta-lognormal assumption for standardizing the CPUE calculated from the reconstructed data, and a three-stage model evaluation was performed using (1) Akaike and Bayesian information criteria to determine the most favorable variable composition of the standardization models, (2) overall R² via cross-validation to compare fitting performance between area-separated and area-combined standardizations, and (3) system-based testing to explore the consistency of the standardized CPUEs with auxiliary data in the PBF stock assessment model. The last stage of evaluation revealed high consistency among the data, thus demonstrating improvements in data reconstruction for estimating the abundance index, and consequently the stock assessment. PMID:28968434

  7. 3-Dimensional stereo implementation of photoacoustic imaging based on a new image reconstruction algorithm without using discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu

    2017-05-01

    In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system, and we introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key concept of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a search region, using the geometric distance between each sensor element of the acoustic detector and the corresponding search region, denoted by a grid. We derive the mathematical equation for the magnitude of this existence possibility, which can be used to implement the proposed algorithm, and we derive the equations for both the one-dimensional and two-dimensional sensing array cases. Simulated k-Wave data are used to compare the image quality of the proposed algorithm with that of a conventional algorithm in which the FFT must necessarily be used. The k-Wave MATLAB simulation results demonstrate the effectiveness of the proposed reconstruction algorithm.
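
    Read as pseudocode, the proposed idea resembles a delay-and-sum accumulation: for each search-grid point, sum the detector samples whose time of flight matches the sensor-to-point distance. A minimal sketch of that reading, in which the geometry, sampling rate, and possibility measure are simplified assumptions:

```python
import numpy as np

def possibility_map(signals, sensor_xy, grid_xy, fs, c=1500.0):
    """Accumulate, for every grid point, the detector samples whose
    time of flight matches the sensor-to-point distance -- a
    delay-and-sum-style reading of the 'existence possibility' idea.

    signals   -- (n_sensors, n_samples) recorded acoustic traces
    sensor_xy -- (n_sensors, 2) sensor positions [m]
    grid_xy   -- (n_points, 2) search-grid positions [m]
    fs        -- sampling frequency [Hz]; c -- speed of sound [m/s]
    """
    n_sensors, n_samples = signals.shape
    out = np.zeros(len(grid_xy))
    for s in range(n_sensors):
        dist = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)  # time-of-flight sample
        valid = idx < n_samples
        out[valid] += signals[s, idx[valid]]       # accumulate evidence
    return out
```

    Note that nothing here requires a discrete Fourier transform, which is the property the record emphasizes relative to FFT-based reconstruction.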

  8. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    NASA Astrophysics Data System (ADS)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.

  9. Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm

    PubMed Central

    Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed

    2008-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581

  10. Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.

    PubMed

    Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed

    2004-01-01

    Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.
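
    The fitting pattern described in both records, minimizing the misfit between a parametric pin curve and the measured sinogram with the downhill simplex ("Amoeba") method, can be sketched with SciPy's Nelder-Mead optimizer. The three-parameter sinusoid below is a simplified stand-in for the full VRX geometric model:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in for a rotating-pin sinogram: the pin traces a
# sinusoid across the detector as the gantry angle advances.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
true = (80.0, 1.3, 255.5)                       # radius, phase, center cell
trace = true[0] * np.sin(theta + true[1]) + true[2]
trace += np.random.default_rng(5).normal(0, 0.3, theta.size)

def cost(params):
    r, phi, c = params
    model = r * np.sin(theta + phi) + c         # parametric pin curve
    return np.sum((model - trace) ** 2)

# "Amoeba" is the Nelder-Mead downhill simplex method.
fit = minimize(cost, x0=(50.0, 0.0, 250.0), method='Nelder-Mead')
print(fit.x)   # recovered geometric parameters
```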

  11. Filament capturing with the multimaterial moment-of-fluid method*

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Shashkov, Mikhail

    2015-01-15

    A novel method for capturing two-dimensional, thin, under-resolved material configurations, known as “filaments,” is presented in the context of interface reconstruction. This technique uses a partitioning procedure to detect disconnected regions of material in the advective preimage of a cell (indicative of a filament) and makes use of the existing functionality of the Multimaterial Moment-of-Fluid interface reconstruction method to accurately capture the under-resolved feature, while exactly conserving volume. An algorithm for Adaptive Mesh Refinement in the presence of filaments is developed so that refinement is introduced only near the tips of filaments and where the Moment-of-Fluid reconstruction error is still large. Comparison to the standard Moment-of-Fluid method is made. As a result, it is demonstrated that using filament capturing at a given resolution yields gains in accuracy comparable to introducing an additional level of mesh refinement, at significantly lower cost.

  12. Simplified projection technique to correct geometric and chromatic lens aberrations using plenoptic imaging.

    PubMed

    Dallaire, Xavier; Thibault, Simon

    2017-04-01

    Plenoptic imaging has been used in the past decade mainly for 3D reconstruction or digital refocusing. It has also been shown that this technology has potential for correcting monochromatic aberrations in a standard optical system. In this paper, we present an algorithm for reconstructing images using a projection technique while correcting the defects present in them; the method can be applied to chromatic aberrations and to wide-angle optical systems. We show that the impact of noise on the reconstruction procedure is minimal. Trade-offs between the sampling of the optical system needed for characterization and the resulting image quality are presented. Examples are shown for aberrations in a classic optical system and for chromatic aberrations. The technique is also applied to a wide-angle optical system with a full field of view of 140° (FFOV 140°). This technique could be used to further simplify or minimize optical systems.

  13. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
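
    The winning TPS interpolation step, turning a sparse MVF defined at control points into a dense MVF, maps directly onto SciPy's radial basis function interpolator, whose default kernel is the thin-plate spline. A sketch with synthetic control points and displacements:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)

# Sparse MVF: 3D displacement vectors known only at surface control points.
control_pts = rng.uniform(0, 64, (120, 3))
control_vec = rng.normal(0, 2.0, (120, 3))

# Thin-plate-spline interpolation to a dense voxel grid (the TPS variant
# is RBFInterpolator's default kernel).
tps = RBFInterpolator(control_pts, control_vec, kernel='thin_plate_spline')
zz, yy, xx = np.mgrid[0:64:8, 0:64:8, 0:64:8]
dense_pts = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3).astype(float)
dense_mvf = tps(dense_pts).reshape(8, 8, 8, 3)   # dense motion vector field
```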

  14. Cone-beam reconstruction for the two-circles-plus-one-line trajectory

    NASA Astrophysics Data System (ADS)

    Lu, Yanbin; Yang, Jiansheng; Emerson, John W.; Mao, Heng; Zhou, Tie; Si, Yuanzheng; Jiang, Ming

    2012-05-01

    The Kodak Image Station In-Vivo FX has an x-ray module with cone-beam configuration for radiographic imaging but lacks the functionality of tomography. To introduce x-ray tomography into the system, we choose the two-circles-plus-one-line trajectory by mounting one translation motor and one rotation motor. We establish a reconstruction algorithm by applying the M-line reconstruction method. Numerical studies and preliminary physical phantom experiment demonstrate the feasibility of the proposed design and reconstruction algorithm.

  15. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline paths through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique with refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index-matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out reconstructions using the conventional algebraic reconstruction technique (ART) and the refraction-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius with the ART-rc algorithm and water as the RI-matched medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. Images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in the sinogram space.
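
    The ART backbone that ART-rc extends is the classic Kaczmarz sweep, sketched below. The refraction correction itself would enter by recomputing each ray's weights along the bent ray path, which this straight-ray toy does not do:

```python
import numpy as np

def art(A, b, n_sweeps=10, relax=0.5):
    """Classic ART (Kaczmarz): project the estimate onto one ray equation
    at a time. A refraction-corrected variant would rebuild each row of A
    from the bent, rather than straight, ray path through the dosimeter."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```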

  16. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. Signal reconstruction techniques are computationally intensive and slow, which makes them impractical for real-time processing applications. This paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the most popular CS reconstruction algorithms. We show implementation results for the proposed architectures on FPGA, ASIC and on a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time of the optimization problem by at least 25%, whereas for the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal length N and sparsity m. The algorithm is divided into three kernels; each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. The implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, whereas with the thresholding method it requires 18 μs. The ASIC implementation reconstructs the signal in 13 μs, and our custom many-core, operating at 1.18 GHz, takes 18.28 μs. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC architectures perform 1.3× and 1.8× faster, respectively. The proposed many-core implementation also performs 3000× faster than a CPU and 2000× faster than a GPU.
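
    For reference, the OMP recursion that all of these architectures accelerate fits in a few lines of NumPy. The demonstration dimensions below (a 256-length signal, sparsity 8, 64 measurements) mirror the setup quoted in the record, while the Gaussian dictionary is an assumption:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily select the dictionary atom
    most correlated with the residual, then re-fit on all chosen atoms."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(7)
Phi = rng.standard_normal((64, 256)) / np.sqrt(64)   # measurement matrix
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x_hat = omp(Phi, Phi @ x_true, sparsity=8)
```

    The atom-selection step (the argmax over correlations) and the least-squares refit are the two hot spots the paper maps onto hardware kernels.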

  17. Application of composite dictionary multi-atom matching in gear fault diagnosis.

    PubMed

    Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng

    2011-01-01

    Sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite dictionary multi-atom matching decomposition and reconstruction algorithm, and introduces threshold de-noising into the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constructed, and a genetic algorithm was applied to search for the best-matching atom. The analysis of simulated gear fault signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be extracted separately. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the computational efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the computational efficiency of the decomposition algorithm was significantly enhanced. In addition, the multi-atom matching algorithm is shown to be superior to the single-atom matching algorithm in both computational efficiency and robustness. Finally, the algorithm was applied to gear fault engineering signals and achieved good results.

  18. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed, and recently compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.

  19. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed, and recently compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values. PMID:24371468
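
    The ART+TV combination evaluated in both records alternates a data-consistency sweep with steps that reduce the image's total variation. A compact sketch of that alternation, in which the step sizes and the smoothed TV gradient are illustrative choices rather than the simulator's code:

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of a 2D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :])
    return -(div_x + div_y)

def art_tv(A, b, shape, n_outer=20, relax=0.2, tv_step=0.02, n_tv=10):
    """Alternate one ART sweep (data consistency) with a few steepest-
    descent steps on the TV term (regularization)."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A ** 2).sum(axis=1)
    for _ in range(n_outer):
        for i in range(A.shape[0]):            # ART sweep over rays
            x += relax * (b[i] - A[i] @ x) / max(row_norm2[i], 1e-12) * A[i]
        img = x.reshape(shape)
        for _ in range(n_tv):                  # TV minimization steps
            img = img - tv_step * tv_grad(img)
        x = img.ravel()
    return x.reshape(shape)
```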

  20. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose: To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods: Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and those images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results: In the 25 patients who underwent the reduced-dose protocol, the mean decrease in CTDIvol was 46% (range, 19%–65%) and the mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P > .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion: CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013. Online supplemental material is available for this article. PMID:24091359

  1. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    NASA Astrophysics Data System (ADS)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

    Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object while tolerating noisy data, a Rudin-Osher-Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to address these problems in ECT. The effects of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise, and that the AADMM algorithm outperforms the other algorithms in identifying the object against its background.
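
    The ROF model that both solvers target is min_u 0.5·||u − f||² + λ·TV(u). As a point of reference, the sketch below solves it with Chambolle's dual projection algorithm, a classic ROF solver standing in for the SAL and AADMM schemes examined in the paper:

```python
import numpy as np

def rof_denoise(f, lam=0.1, tau=0.125, n_iter=100):
    """ROF total-variation denoising via Chambolle's dual projection
    algorithm: min_u 0.5 * ||u - f||^2 + lam * TV(u)."""
    div = lambda px, py: (np.diff(px, axis=1, prepend=px[:, :1])
                          + np.diff(py, axis=0, prepend=py[:1, :]))
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        u = div(px, py) - f / lam
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm      # projected dual ascent step
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

# Noisy step image: TV regularization preserves the edge while smoothing,
# the shape-preservation property the record attributes to the ROF model.
rng = np.random.default_rng(8)
f = np.zeros((64, 64)); f[:, 32:] = 1.0
u = rof_denoise(f + rng.normal(0, 0.2, f.shape), lam=0.2)
```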

  2. An algorithm based on OmniView technology to reconstruct sagittal and coronal planes of the fetal brain from volume datasets acquired by three-dimensional ultrasound.

    PubMed

    Rizzo, G; Capponi, A; Pietrolucci, M E; Capece, A; Aiello, E; Mammarella, S; Arduini, D

    2011-08-01

    To describe a novel algorithm, based on the new display technology 'OmniView', developed to visualize diagnostic sagittal and coronal planes of the fetal brain from volumes obtained by three-dimensional (3D) ultrasonography. We developed an algorithm to image standard neurosonographic planes by drawing dissecting lines through the axial transventricular view of 3D volume datasets acquired transabdominally. The algorithm was tested on 106 normal fetuses at 18-24 weeks of gestation, and the visualization rates of diagnostic brain planes were evaluated by two independent reviewers. The algorithm was also applied to nine cases with proven brain defects. Using the algorithm on normal fetuses, the two reviewers obtained satisfactory images, with visualization rates ranging between 71.7% and 96.2% for sagittal planes and between 76.4% and 90.6% for coronal planes. The agreement rate between the two reviewers, as expressed by Cohen's kappa coefficient, was > 0.93 for sagittal planes and > 0.89 for coronal planes. All nine abnormal volumes were identified by a single observer from among a series including normal brains, and eight of these nine cases were diagnosed correctly. This novel algorithm can be used to visualize standard sagittal and coronal planes of the fetal brain. This approach may simplify examination of the fetal brain and reduce the dependence of successful imaging on operator skill. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.

  3. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph, and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with an SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching technique, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.

  4. Diagnostic accuracy of 256-row multidetector CT coronary angiography with prospective ECG-gating combined with fourth-generation iterative reconstruction algorithm in the assessment of coronary artery bypass: evaluation of dose reduction and image quality.

    PubMed

    Ippolito, Davide; Fior, Davide; Franzesi, Cammillo Talei; Riva, Luca; Casiraghi, Alessandra; Sironi, Sandro

    2017-12-01

    Effective radiation dose in coronary CT angiography (CTCA) for coronary artery bypass graft (CABG) evaluation is remarkably high because of the long scan lengths. Prospective electrocardiographic (ECG) gating with iterative reconstruction can reduce the effective radiation dose. The aim was to evaluate the diagnostic performance of a low-kV CTCA protocol with a prospective ECG-gating technique and an iterative reconstruction (IR) algorithm in the follow-up of CABG patients, compared with a standard retrospective protocol. Seventy-four non-obese patients with known coronary disease treated with artery bypass grafting were prospectively enrolled. All patients underwent 256-row MDCT (Brilliance iCT, Philips) CTCA using a low-dose protocol (100 kV; 800 mAs; rotation time: 0.275 s) combined with prospective ECG-triggered acquisition and a fourth-generation IR technique (iDose4, Philips); the full length of each bypass graft was included in the evaluation. A control group of 42 comparable patients was evaluated with standard retrospective ECG-gated CTCA (100 kV; 800 mAs). On both CT examinations, ROIs were placed to calculate the standard deviation of pixel values and the intra-vessel density. Diagnostic quality was also evaluated using a 4-point quality scale. Despite the statistically significant reduction in radiation dose as measured by DLP (study group mean DLP: 274 mGy cm; control group mean DLP: 1224 mGy cm; P < 0.001), no statistical differences were found between the PGA and RGH groups regarding absolute intra-vessel density values and SNR. Qualitative analysis, performed by two radiologists in a double-blind fashion, did not reveal any significant difference in the diagnostic quality of the two groups. The development of high-speed MDCT scanners combined with modern IR allows an accurate evaluation of CABG with prospective ECG-gating protocols in a single breath hold, yielding a significant reduction in radiation dose.
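
    For orientation, the DLP figures above translate into effective dose via a body-region conversion coefficient, and the ROI statistics feed an SNR figure. A minimal sketch assuming the generic adult chest coefficient of 0.014 mSv/(mGy cm); the ROI values are hypothetical placeholders, not study measurements.

      # Effective dose from DLP and SNR from ROI statistics (illustrative).
      K_CHEST = 0.014  # mSv per mGy*cm, generic adult chest conversion factor

      def effective_dose_msv(dlp_mgy_cm, k=K_CHEST):
          return k * dlp_mgy_cm

      def snr(mean_hu, sd_hu):
          """Signal-to-noise ratio from intra-vessel ROI mean and noise (SD)."""
          return mean_hu / sd_hu

      print(effective_dose_msv(274))         # prospective low-dose protocol
      print(effective_dose_msv(1224))        # retrospective standard protocol
      print(snr(mean_hu=420.0, sd_hu=28.0))  # hypothetical ROI values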

  5. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation.

    PubMed

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-07

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a small number of image voxels along each line of response, so that the cross-dependencies of image voxels are reduced, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborate a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in the pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that, for the same computation time, the proposed subsetization yields further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
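
    The interleaving idea is easy to state in code: TOF bins are dealt round-robin into subsets so that each subset spans the full TOF range with maximal spacing between its bins, reducing spatial coupling within a subset. A minimal sketch; the bin and subset counts are arbitrary assumptions, not the paper's configuration.

      # Interleaved TOF-bin subsetization (illustrative bin/subset counts).
      def interleaved_tof_subsets(n_tof_bins, n_subsets):
          """Partition TOF bin indices into interleaved subsets."""
          return [list(range(s, n_tof_bins, n_subsets)) for s in range(n_subsets)]

      # e.g. 13 TOF bins split into 4 interleaved subsets
      for s, bins in enumerate(interleaved_tof_subsets(13, 4)):
          print(f"subset {s}: TOF bins {bins}")
      # In an OSEM-style update, each sub-iteration would forward/back-project
      # only the events falling in one subset's TOF (and azimuthal) bins.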

  6. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    NASA Astrophysics Data System (ADS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a small number of image voxels along each line of response, so that the cross-dependencies of image voxels are reduced, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborate a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in the pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that, for the same computation time, the proposed subsetization yields further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.

  7. Evaluation of corrective reconstruction methods using a 3D cardiac-torso phantom and bull's-eye plots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, X.D.; Tsui, B.M.W.; Gregoriou, G.K.

    The goal of the investigation was to study the effectiveness of corrective reconstruction methods in cardiac SPECT using a realistic phantom and to qualitatively and quantitatively evaluate the reconstructed images using bull's-eye plots. A 3D mathematical phantom which realistically models the anatomical structures of the cardiac-torso region of patients was used. The phantom allows simulation of both the attenuation distribution and the uptake of radiopharmaceuticals in different organs. Also, the phantom can be easily modified to simulate different genders and variations in patient anatomy. Two-dimensional projection data were generated from the phantom and included the effects of attenuation and detector response blurring. The reconstruction methods used in the study included conventional filtered backprojection (FBP) with no attenuation compensation, as well as the first-order Chang algorithm, an iterative filtered backprojection (IFBP) algorithm, a weighted least-squares conjugate gradient algorithm, and the ML-EM algorithm, each with non-uniform attenuation compensation. The transaxial reconstructed images were rearranged into short-axis slices from which bull's-eye plots of the count density distribution in the myocardium were generated.
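
    Of the methods compared above, ML-EM is the easiest to show compactly: each iteration multiplies the current estimate by the backprojected ratio of measured to estimated projections. A minimal sketch on a toy random system matrix; a real implementation would fold the attenuation and detector-blur effects into the matrix.

      # Toy ML-EM loop; A, x_true and y are synthetic stand-ins.
      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.uniform(size=(40, 16))            # toy system matrix (LORs x voxels)
      x_true = rng.uniform(1.0, 5.0, size=16)   # toy activity distribution
      y = rng.poisson(A @ x_true)               # noisy projection data

      x = np.ones(16)                           # uniform initial estimate
      sens = A.sum(axis=0)                      # sensitivity image, A^T 1
      for _ in range(50):
          ratio = y / np.maximum(A @ x, 1e-12)  # measured / estimated projections
          x *= (A.T @ ratio) / sens             # multiplicative ML-EM update
      print(np.round(x, 2))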

  8. Feasibility of 3D Reconstruction of Neural Morphology Using Expansion Microscopy and Barcode-Guided Agglomeration

    PubMed Central

    Yoon, Young-Gyu; Dai, Peilun; Wohlwend, Jeremy; Chang, Jae-Byum; Marblestone, Adam H.; Boyden, Edward S.

    2017-01-01

    We here introduce and study the properties, via computer simulation, of a candidate automated approach to algorithmic reconstruction of dense neural morphology, based on simulated data of the kind that would be obtained via two emerging molecular technologies—expansion microscopy (ExM) and in-situ molecular barcoding. We utilize a convolutional neural network to detect neuronal boundaries from protein-tagged plasma membrane images obtained via ExM, as well as a subsequent supervoxel-merging pipeline guided by optical readout of information-rich, cell-specific nucleic acid barcodes. We attempt to use conservative imaging and labeling parameters, with the goal of establishing a baseline case that points to the potential feasibility of optical circuit reconstruction, leaving open the possibility of higher-performance labeling technologies and algorithms. We find that, even with these conservative assumptions, an all-optical approach to dense neural morphology reconstruction may be possible via the proposed algorithmic framework. Future work should explore both the design-space of chemical labels and barcodes, as well as algorithms, to ultimately enable routine, high-performance optical circuit reconstruction. PMID:29114215
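
    The barcode-guided merging step has a simple core: adjacent supervoxels are agglomerated when their dominant optically read barcodes agree. A minimal sketch with a hypothetical toy supervoxel graph and noisy barcode reads; the real pipeline operates on ExM image segmentations rather than hand-written dictionaries.

      # Barcode-guided supervoxel agglomeration on toy data.
      from collections import Counter

      supervoxel_barcodes = {            # supervoxel -> barcode reads (with noise)
          0: ["ACGT", "ACGT", "TTGC"],
          1: ["ACGT", "ACGT"],
          2: ["TTGC", "TTGC", "TTGC"],
          3: ["TTGC", "ACGT", "TTGC"],
      }
      adjacency = [(0, 1), (1, 2), (2, 3)]   # touching supervoxel pairs

      def dominant(reads):
          return Counter(reads).most_common(1)[0][0]

      parent = {sv: sv for sv in supervoxel_barcodes}   # union-find forest
      def find(v):
          while parent[v] != v:
              parent[v] = parent[parent[v]]             # path halving
              v = parent[v]
          return v

      for a, b in adjacency:
          if dominant(supervoxel_barcodes[a]) == dominant(supervoxel_barcodes[b]):
              parent[find(a)] = find(b)                 # merge into one neuron

      print({sv: find(sv) for sv in supervoxel_barcodes})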

  9. Feasibility of 3D Reconstruction of Neural Morphology Using Expansion Microscopy and Barcode-Guided Agglomeration.

    PubMed

    Yoon, Young-Gyu; Dai, Peilun; Wohlwend, Jeremy; Chang, Jae-Byum; Marblestone, Adam H; Boyden, Edward S

    2017-01-01

    We here introduce and study the properties, via computer simulation, of a candidate automated approach to algorithmic reconstruction of dense neural morphology, based on simulated data of the kind that would be obtained via two emerging molecular technologies-expansion microscopy (ExM) and in-situ molecular barcoding. We utilize a convolutional neural network to detect neuronal boundaries from protein-tagged plasma membrane images obtained via ExM, as well as a subsequent supervoxel-merging pipeline guided by optical readout of information-rich, cell-specific nucleic acid barcodes. We attempt to use conservative imaging and labeling parameters, with the goal of establishing a baseline case that points to the potential feasibility of optical circuit reconstruction, leaving open the possibility of higher-performance labeling technologies and algorithms. We find that, even with these conservative assumptions, an all-optical approach to dense neural morphology reconstruction may be possible via the proposed algorithmic framework. Future work should explore both the design-space of chemical labels and barcodes, as well as algorithms, to ultimately enable routine, high-performance optical circuit reconstruction.

  10. Pan-sharpening via compressed superresolution reconstruction and multidictionary learning

    NASA Astrophysics Data System (ADS)

    Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang

    2018-01-01

    In recent compressed sensing (CS)-based pan-sharpening algorithms, pan-sharpening performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linearly weighted high-resolution multispectral (HRM) image, resulting in a loss of spatial and spectral information. The other is that the dictionary construction process depends on non-truth training samples. These problems have limited the application of CS-based pan-sharpening algorithms. To solve these two problems, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and the linearly weighted HRM images. Meanwhile, a multidictionary with ridgelets and curvelets is learned for both stages of the superresolution reconstruction process. Since ridgelets and curvelets better capture structural and directional characteristics, a better reconstruction result can be obtained. Experiments were done on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive compared with recent CS-based pan-sharpening methods and other well-known methods.
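
    The first problem is easy to demonstrate numerically: the PAN image is only approximately a linear weighting of the MS bands, and the residual of a least-squares fit is exactly the error term the two-stage model must absorb. A minimal sketch on random stand-in images; the band count and weights are assumptions.

      # Fit band weights so that ms @ w approximates pan; residual = model error.
      import numpy as np

      rng = np.random.default_rng(2)
      h, w, bands = 64, 64, 4
      ms = rng.uniform(size=(h, w, bands))            # multispectral bands
      pan = ms @ np.array([0.3, 0.3, 0.2, 0.2])       # idealized pan image
      pan += rng.normal(scale=0.02, size=(h, w))      # modeling error

      M = ms.reshape(-1, bands)
      weights, *_ = np.linalg.lstsq(M, pan.ravel(), rcond=None)
      residual = pan.ravel() - M @ weights
      print("fitted weights:", np.round(weights, 3))
      print("residual RMS  :", residual.std())        # the error term above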

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virador, Patrick R.G.

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses, for the first time, the problem of fully-3D tomographic reconstruction using a septa-less, stationary (i.e. no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier-method-based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points, instead of rebinning the events into predefined crystal-face LORs, which is the only other method proposed thus far to handle DOI information. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width, evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest-neighbor smoothing in 2D in the radial bins, (b) employ a semi-iterative procedure in order to estimate the unsampled data and (c) mash the in-plane projections, i.e. 2D data, with the projection data from the first oblique angles, which are then used to reconstruct the preliminary image in the 3D reprojection (3DRP) algorithm. The author presents reconstructed images of point sources and extended sources in both 2D and 3D. The images show that the camera is anticipated to eliminate radial elongation and produce artifact-free and essentially spatially isotropic images throughout the entire FOV. It has a resolution of 1.50 ± 0.75 mm FWHM near the center, 2.25 ± 0.75 mm FWHM in the bulk of the FOV, and 3.00 ± 0.75 mm FWHM near the edges and corners of the FOV.
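
    A small sketch of the central idea of defining LORs between measured DOI interaction points: from two 3D interaction positions, derive the LOR's in-plane radial offset and azimuthal angle, then drop the radial offset into a fixed-width bin as the FFT-based methods require. The geometry values and bin width are illustrative assumptions, not the camera's actual dimensions.

      # LOR parameterization from two measured interaction points (toy values).
      import numpy as np

      def lor_radial_and_angle(p1, p2):
          """In-plane (x, y) radial offset and azimuthal angle of the LOR."""
          d = np.asarray(p2[:2]) - np.asarray(p1[:2])
          phi = np.arctan2(d[1], d[0]) % np.pi          # LOR direction in-plane
          n = np.array([-np.sin(phi), np.cos(phi)])     # unit normal to the LOR
          r = float(np.dot(np.asarray(p1[:2]), n))      # signed distance from origin
          return r, phi

      r, phi = lor_radial_and_angle((-10.0, 3.0, 1.2), (9.0, 4.5, -0.8))
      bin_width = 0.5                                    # fixed-width radial bins
      print(f"r = {r:.2f}, phi = {np.degrees(phi):.1f} deg, "
            f"radial bin = {int(np.floor(r / bin_width))}")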

  12. MR image reconstruction via guided filter.

    PubMed

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new approach to an efficient MRI recovery algorithm based on a guided filter. The guided filter is an edge-preserving smoothing operator that behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently, yielding two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering: one image is used as the guidance image, and the other is used as the filtered input image. By introducing the guided filter, our reconstruction algorithm recovers more detail. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
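
    The guided filter itself is a standard operator (He et al.), computable entirely with box filters: the output is a locally linear transform of the guidance image. A minimal sketch following that standard formulation; the window radius, epsilon, and random test images are assumptions, not the paper's settings.

      # Guided filter via box filters; I = guidance image, p = input image.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def guided_filter(I, p, radius=4, eps=1e-3):
          size = 2 * radius + 1
          mean = lambda x: uniform_filter(x, size=size)   # box filter
          mI, mp = mean(I), mean(p)
          cov_Ip = mean(I * p) - mI * mp
          var_I = mean(I * I) - mI * mI
          a = cov_Ip / (var_I + eps)        # local linear coefficients
          b = mp - a * mI
          return mean(a) * I + mean(b)      # edge-preserving output

      rng = np.random.default_rng(3)
      guide = rng.uniform(size=(64, 64))    # e.g. image from the first cost function
      noisy = guide + rng.normal(scale=0.1, size=(64, 64))
      print(guided_filter(guide, noisy).shape)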

  13. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder) has kept the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high-quality video from space and therefore has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast-quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high-quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
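
    A minimal sketch of the DPCM core described above: a fixed (non-adaptive) previous-pixel predictor and a non-uniform quantizer that is fine near zero and coarse for large prediction errors. The quantizer levels here are illustrative assumptions; the NASA design's actual levels, predictor taps, and Huffman tables are not reproduced.

      # Intra-field DPCM on one scan line with a non-uniform quantizer.
      import numpy as np

      LEVELS = np.array([-48, -24, -10, -3, 0, 3, 10, 24, 48])  # coarse far, fine near 0

      def quantize(e):
          return LEVELS[np.argmin(np.abs(LEVELS - e))]

      def dpcm_encode_decode(line):
          """Encode one scan line; return quantized residuals and reconstruction."""
          recon, residuals, prev = [], [], 128        # mid-gray initial predictor
          for pixel in line:
              q = quantize(int(pixel) - prev)         # predict from previous pixel
              residuals.append(q)                     # entropy-coded in the real system
              prev = int(np.clip(prev + q, 0, 255))   # decoder-matched reconstruction
              recon.append(prev)
          return residuals, recon

      res, rec = dpcm_encode_decode([128, 130, 135, 180, 182, 181, 90, 88])
      print(res)
      print(rec)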

  14. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    PubMed

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via the incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
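
    For context, the conventional (indirect) step that direct methods replace is graphical analysis on the reconstructed frames; for (nearly) irreversible tracers this is commonly the Patlak plot, although the abstract does not name it. A minimal sketch of a per-voxel Patlak fit on a synthetic time-activity curve; all numbers are fabricated for illustration.

      # Patlak graphical analysis on one synthetic voxel TAC.
      import numpy as np

      t = np.linspace(1, 60, 12)                     # frame mid-times (min)
      cp = 10 * np.exp(-0.1 * t) + 1.0               # plasma input (synthetic)
      int_cp = np.cumsum(cp) * (t[1] - t[0])         # running integral of input

      Ki_true, V0_true = 0.05, 0.8
      ct = Ki_true * int_cp + V0_true * cp           # tissue TAC for one voxel

      # Patlak: ct/cp = Ki * (int_cp/cp) + V0  ->  straight line at late times
      x, y = int_cp / cp, ct / cp
      late = t > 20                                  # quasi-equilibrium frames
      Ki, V0 = np.polyfit(x[late], y[late], 1)
      print(f"Ki = {Ki:.4f} (true {Ki_true}), V0 = {V0:.3f}")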

  15. Direct reconstruction of parametric images for brain PET with event-by-event motion correction: evaluation in two tracers across count levels

    NASA Astrophysics Data System (ADS)

    Germino, Mary; Gallezot, Jean-Dominique; Yan, Jianhua; Carson, Richard E.

    2017-07-01

    Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, 'direct reconstruction' incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in VT images than the indirect method.
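
    The comparison metric above is simple to make explicit: the regional CoV is the standard deviation of a parameter across a region's voxels divided by its regional mean. A minimal sketch with hypothetical VT samples; the numbers are not from the study.

      # Regional coefficient of variation for parametric images (toy values).
      import numpy as np

      def regional_cov(values):
          values = np.asarray(values, dtype=float)
          return values.std() / values.mean()

      vt_indirect = np.random.default_rng(4).normal(20.0, 3.0, size=200)
      vt_direct = np.random.default_rng(4).normal(20.0, 1.5, size=200)
      print(f"indirect CoV = {regional_cov(vt_indirect):.3f}")
      print(f"direct   CoV = {regional_cov(vt_direct):.3f}")   # lower, as reported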

  16. The role of advanced reconstruction algorithms in cardiac CT

    PubMed Central

    Halliburton, Sandra S.; Tanabe, Yuki; Partovi, Sasan

    2017-01-01

    Non-linear iterative reconstruction (IR) algorithms have been increasingly incorporated into clinical cardiac CT protocols at institutions around the world. Multiple IR algorithms are available commercially from various vendors. IR algorithms decrease image noise and are primarily used to enable lower radiation dose protocols. IR can also be used to improve image quality for imaging of obese patients, coronary atherosclerotic plaques, coronary stents, and myocardial perfusion. In this article, we will review the various applications of IR algorithms in cardiac imaging and evaluate how they have changed practice. PMID:29255694

  17. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well suited to such a system since they can potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not correspond directly to the image properties. IIR algorithms are also computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced as a linear combination of two separately reconstructed images - one containing gray-level information and the other with enhanced high-frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
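
    The final combination step lends itself to a one-line sketch: two component reconstructions are blended with two weights that map directly onto gray level and sharpness. The component images and weight values below are placeholders, not the authors' parameterization.

      # Linear combination of a gray-level and a high-frequency reconstruction.
      import numpy as np

      rng = np.random.default_rng(5)
      gray_image = rng.uniform(size=(128, 128))   # few-iteration smooth IIR result
      edge_image = rng.normal(size=(128, 128))    # few-iteration high-pass IIR result

      def combine(gray, edges, w_gray=1.0, w_edges=0.4):
          """w_edges controls sharpness/noise; w_gray preserves gray levels."""
          return w_gray * gray + w_edges * edges

      print(combine(gray_image, edge_image).shape)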

  18. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM)-based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework, the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card.

  19. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and the phase to reconstruct the missing areas.
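
    The ER iteration at the core of the method alternates between two constraints: enforce the (estimated) Fourier magnitude in the frequency domain, then restore the known pixels in the spatial domain, so the phase - and with it the missing intensities - is gradually retrieved. A minimal sketch where, as a simplifying assumption, the true magnitude stands in for the estimated one.

      # ER-style phase retrieval for inpainting a missing patch region.
      import numpy as np

      rng = np.random.default_rng(6)
      patch = rng.uniform(size=(16, 16))
      known = np.ones_like(patch, dtype=bool)
      known[5:11, 5:11] = False                     # missing region
      target_mag = np.abs(np.fft.fft2(patch))       # assumed-known magnitude

      x = np.where(known, patch, 0.5)               # initial guess
      for _ in range(200):
          F = np.fft.fft2(x)
          F = target_mag * np.exp(1j * np.angle(F))     # magnitude constraint
          x = np.real(np.fft.ifft2(F))
          x = np.where(known, patch, np.clip(x, 0, 1))  # spatial constraint
      print("missing-area RMSE:",
            np.sqrt(np.mean((x[~known] - patch[~known]) ** 2)))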

  20. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach, whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS), requiring spatial-domain convolutions, and (2) fast adaptive scatter kernel superposition (fASKS), where, through a linearity approximation, convolution is performed efficiently in Fourier space. The conventional SKS algorithm, ASKS, and fASKS were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracy and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
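
    To fix ideas, here is the stationary, symmetric-kernel baseline that ASKS/fASKS improve upon - not the adaptive algorithms themselves: scatter is modeled as the projection convolved with one broad kernel and then subtracted, with the convolution done in Fourier space (the idea fASKS exploits). Kernel amplitude and width are illustrative assumptions.

      # Stationary SKS-style scatter estimate via FFT convolution (toy data).
      import numpy as np

      def gaussian_kernel_2d(shape, sigma):
          y, x = np.indices(shape)
          cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
          k = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
          return k / k.sum()

      def sks_scatter_correct(projection, amplitude=0.3, sigma=20.0):
          kernel = amplitude * gaussian_kernel_2d(projection.shape, sigma)
          # circular convolution in Fourier space, kernel recentered to (0, 0)
          scatter = np.real(np.fft.ifft2(np.fft.fft2(projection) *
                                         np.fft.fft2(np.fft.ifftshift(kernel))))
          return projection - scatter, scatter

      proj = np.ones((64, 64))                    # flat projection placeholder
      corrected, scatter = sks_scatter_correct(proj)
      print(scatter.mean(), corrected.mean())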
